Current Technology Trends - Blog


Enabling precision agriculture with IoT


FarmBeats is an AI for Earth Lighthouse project aimed at showcasing the benefits of IoT in a variety of applications. Water scarcity and pollution are threatening the livelihoods of farmers around the world, and they are under immense pressure to produce. Through sensors, drones, data analytics, AI, and connectivity solutions, Microsoft is enabling precision agriculture to improve yield while reducing resource consumption.


Edge Computing and the Cloud


image courtesy of openedgecomputing.org

This article is intended to be a simple introduction to what can be a complicated technical process. It usually helps to begin articles about a specific cloud technology with a definition. Edge computing’s definition, like that of many other technologies, has evolved in a very short period of time. In the past, edge computing could describe the devices that connect the local area network (LAN) to the wide area network (WAN), such as firewalls and routers. Today’s definitions are more focused on the cloud and how to overcome some of the challenges of cloud computing. The definition I will use as the basis for this article is bringing compute and data storage resources as close as possible to the people and machines that require the data. Often this means creating a hybrid environment to complement a distant or relatively slow public cloud solution: an alternate location with resources that can provide the faster response required.

Benefits of Edge Computing

The primary benefit of living on the edge is increased performance, most often described in the networking world as reduced latency. Latency is the time it takes for data packets to be stored or retrieved. With the growth of Machine Learning (ML), Machine-to-Machine (M2M) communications, and Artificial Intelligence (AI), latency awareness has exploded across the industry. A human working at a workstation can easily tolerate a data latency of 100-200 milliseconds (ms) without much frustration; a gamer would like to see latency at 30 ms or less. Many machines and the applications they run are far less tolerant. The latency tolerance for machine-based applications can range from 10 ms down to effectively zero, needing the data in real time. Some applications humans interface with are also highly latency sensitive, a primary example being voice communications. In the past decade, businesses’ demand for Voice over Internet Protocol (VoIP) phone systems has grown, which in turn has driven the need for better-managed, low-latency networks. Although data transmissions travel at nearly the speed of light, distance still matters. As a result, we reduce latency for our applications by moving the data closer to the edge and its users. This produces the secondary benefit of reduced cost: the closer the data is to the applications, the fewer network resources are required to transmit it.
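To make those latency numbers concrete, here is a minimal Python sketch that times a TCP connection to two endpoints. The hostnames are purely hypothetical placeholders; a real comparison would use your actual cloud and edge addresses.

```python
import time
import socket

def tcp_round_trip_ms(host: str, port: int = 443) -> float:
    """Time how long it takes to open a TCP connection, a rough proxy for network latency."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

# Hypothetical endpoints: a distant public-cloud region vs. a nearby edge node.
for label, host in [("public cloud", "region.cloud.example.com"),
                    ("edge node", "edge.local.example.com")]:
    try:
        print(f"{label}: {tcp_round_trip_ms(host):.1f} ms")
    except OSError as err:
        print(f"{label}: unreachable ({err})")
```

Run against real hosts, the edge endpoint should consistently show the lower number, which is exactly the gap edge computing is designed to close.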

Use Cases for Edge Computing

Content Delivery Networks (CDNs) are often thought of as the predecessor of today’s edge computing solutions. CDNs are geographically distributed networks of content servers designed to deliver video or other web content to the end user. Edge computing seeks to take this to the next step by delivering all types of data even closer to real time.

Internet of Things (IoT) devices are a large part of what is driving the demand for edge computing. A common application is the video surveillance system an organization uses for security. A large amount of data is stored, of which only a fraction is ever needed or accessed. An edge device or system collects all the data, stores it, and transfers only the data that is needed to a public cloud for authorized access.
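As an illustration of that pattern, here is a minimal sketch in which simulated camera frames stand in for a real video feed and a print statement stands in for the cloud API call. The edge device retains everything locally and forwards only the flagged fraction.

```python
import random
from datetime import datetime, timezone

def capture_frame() -> dict:
    """Stand-in for reading a frame from a camera; motion is simulated randomly."""
    return {"timestamp": datetime.now(timezone.utc).isoformat(),
            "motion": random.random() < 0.05,   # roughly 5% of frames contain motion
            "data": bytes(64)}                  # placeholder pixel data

def upload_to_cloud(frame: dict) -> None:
    """Stand-in for the call that would push a flagged frame to a public cloud API."""
    print(f"uploaded frame from {frame['timestamp']}")

local_archive = []                              # everything stays on the edge device
for _ in range(1000):
    frame = capture_frame()
    local_archive.append(frame)                 # full retention at the edge
    if frame["motion"]:                         # only the needed fraction crosses the WAN
        upload_to_cloud(frame)

print(f"stored {len(local_archive)} frames locally, "
      f"uploaded {sum(f['motion'] for f in local_archive)} to the cloud")
```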

Cellular networks and cell towers provide another use case for edge computing. Data for analysis is sent from subscriber phones to an edge system at the cell site. Some of this data is used immediately for call control and call processing. Most of the data, which is not time sensitive, is then transmitted to the cloud for later analysis.

As the technology for and acceptance of driverless cars increase, a similar edge strategy will be used. With driverless applications, however, the edge devices will be located in the car itself because of the need for real-time responses.

These examples all demonstrate that the need for speed is constantly increasing and will continue to grow in our data applications. As fast as our networks become, there will always be a need to shorten processing time for our critical applications.

If you would like to talk more about strategies for cloud migration contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

Common Cloud Terms Defined


image courtesy of technologychronicle.com

“Cloud” and “cloud computing” have become terms whose definitions are so broad that they are nearly irrelevant. Adding to the confusion, many business users and cloud pundits are forced to live with consumer-based cloud definitions. Defining from the negative perspective is not preferred, but the business cloud is not Gmail, Google Drive, or the broad notion of “using someone else’s computer.” For our purposes here, the cloud is a business’s strategy to move or create its IT infrastructure as a virtualized stack, either on premises or in a secure data center. The definition above, as well as the ones to follow, are presented not as fact but as educated opinion, based on a career of over 25 years in telecommunications and IT infrastructure. The most common business cloud terms are defined below:

Private Cloud – A private cloud is a virtualized stack of private IT infrastructure whose resources are used by a single organization or tenant. It is often thought of as an on-premises solution, although many cloud providers offer private cloud off-site in a multi-tenant data center. It is popular with regulated industries like healthcare, where shared storage or RAM is not recommended and could violate regulatory best practices (e.g., HIPAA).

Public Cloud – A public cloud is IT infrastructure from a service provider that sells or rents virtual machines (VMs) from a large pool of managed resources. The managed resources that make up the infrastructure, or VMs, are compute (processors), RAM, and storage. Most large public cloud providers work from a self-service model, providing a portal from which users have free rein to create all the IT infrastructure they need without assistance. This makes procurement easy but can create challenges for management, billing, and cost control.

Hybrid Cloud – A hybrid cloud is a combination of public and private clouds that work together, or integrate with each other, to create a single solution. A common example is an on-premises private cloud that is replicated or backed up to VMs in a public cloud for disaster recovery (DR) or business continuity.

Multi Cloud – A multi-cloud solution is created when an organization uses multiple public cloud providers. This practice is not uncommon, but it is not currently an integrated solution like a hybrid cloud. It can be difficult to manage; I haven’t seen a management product that offers simple, single-pane-of-glass management of a multi-cloud solution. To this point, it is more a good idea than a proven business solution. Organizations pursue multi cloud because it creates even greater redundancy and eliminates a single point of failure.

Containers – Containers are a more recently developed server virtualization technology, similar to a VM but with distinct differences. Containers are designed for a single application, unlike VMs, which often host multiple applications. They create isolation at the application layer rather than at the server layer, and they carry no guest operating system such as Windows or Linux as a VM does. The primary benefit of a container is that if something breaks it affects only that application, not an entire server. Containers’ popularity and acceptance have grown as other benefits have emerged: increased portability between service providers and enhanced security capabilities have allowed container technology to thrive.
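As a small illustration of that single-application model, the sketch below uses the Docker SDK for Python to launch one container. It assumes a local Docker Engine and the docker package are installed; the nginx image is just an example of a single-purpose application.

```python
import docker  # Docker SDK for Python; assumes a local Docker Engine is running

client = docker.from_env()

# Run a single-application container: no guest OS to manage, just the app and its dependencies.
container = client.containers.run(
    "nginx:alpine",          # example image; any single-purpose application image works
    detach=True,
    ports={"80/tcp": 8080},  # expose the app on the host without touching other workloads
    name="demo-web",
)
print(container.name, container.status)

# If this application breaks, only this container is affected; tearing it down
# does not disturb anything else running on the host.
container.stop()
container.remove()
```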

These are my definitions, drawn from my experiences in IT and the data center. I welcome any feedback or differing perspectives from my readers. There are many more terms to define; stay tuned for similar follow-up articles soon.

If you would like to talk more about strategies for cloud migration contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

 

 

Active Directory (AD) in the Cloud

 


Most business users’ first experience with the cloud is through a Software as a Service (SaaS) offering like Microsoft Office 365. Often, as a business’s cloud presence grows and its infrastructure becomes more balanced between cloud and on-premises, new challenges emerge. A common challenge of this integration is synchronizing a directory of users across all parts of the network. Microsoft manages the directory of users with Active Directory (AD). Most Microsoft-based networks have had an AD server on the premises that manages user identification and authentication. As connectivity outside the premises has increased through e-commerce and cloud computing, new technologies have been developed for AD. Office 365 users, whether they realize it or not, use the AD system within Microsoft Azure. Microsoft offers this same cloud-based AD system to all Azure users as Azure AD. Before detailing a cloud-based AD strategy, I will briefly review the benefits of an AD system on the corporate network.

1. Identify the Environment – AD creates central identification and authentication across all platforms and locations of the corporate network.

2. Enable Users – AD gives users more of a self-service experience, less dependent on corporate IT resources. Users can also receive benefits like single sign-on (SSO) when logging on to multiple applications or services.

3. Protect Corporate Data – Authentication is the most basic form of network security. It verifies users on a network the way a passport verifies travelers entering a country.

All public cloud providers offer some form of AD; in this article I will focus on Microsoft Azure. Most administrators considering Azure AD are concerned that it will add another complicated layer on top of the on-premises AD server. Actually, the opposite is true; it can offer a kind of “AD lite,” breaking a user’s identification down into simple fields such as name, tenant, role, and password.

The same Microsoft Azure AD that is used as the directory for Office 365 is free to Azure users. However, there are premium tiers that offer additional functionality, such as company branding and self-service features for users like password reset.

By storing a business’s directory services and authentication in the public or private cloud, a business creates a secure and always-available directory service. Azure AD is completely scalable and can be integrated with other services through APIs and web-based protocols. This also allows integration with on-premises AD servers and single sign-on for all applications. Azure AD can be thought of as identity as a service.
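As one example of those web-based protocols in practice, the sketch below uses Microsoft’s MSAL library for Python to sign a user in against Azure AD and obtain a token. The tenant ID, client ID, and scope shown are placeholders for your own app registration.

```python
import msal  # Microsoft Authentication Library for Python

# Placeholder values: substitute your own tenant ID and app registration (client) ID.
TENANT_ID = "your-tenant-id"
CLIENT_ID = "your-app-client-id"

app = msal.PublicClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
)

# Interactive sign-in against Azure AD; the returned token can then be presented
# to any API that trusts the same directory, which is the single sign-on idea in practice.
result = app.acquire_token_interactive(scopes=["User.Read"])

if "access_token" in result:
    print("signed in as:", result.get("id_token_claims", {}).get("preferred_username"))
else:
    print("sign-in failed:", result.get("error_description"))
```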

Azure AD services can be managed directly in the Azure portal for simple configurations. More sophisticated deployments may be managed with familiar tools such as AD Domain Services (AD DS), Lightweight Directory Access Protocol (LDAP), and Active Directory Federation Services (AD FS).

Office 365 provides a basic outline of how these directory services work. Its directories can be set up in three ways: cloud-only identity, synchronized identity, and federated identity. A cloud-only identity is created within the Office 365 admin portal and managed behind the scenes through Azure AD. Synchronized identity accounts are created on premises with an AD server, and passwords are kept in sync with the cloud; this method uses the cloud as the ultimate basis for the directory. Federated identity is more complex: users are based in the on-premises directory and kept in sync with the cloud, but ultimate verification is retained by the on-premises AD services.

Cloud services benefit most IT infrastructure environments, although they may also create complications. Employing a synchronized directory for all users and applications, on and off the premises, creates a stable foundation to identify and protect all users and data on the corporate network.

 

If you would like to talk more about strategies to migrate data to the cloud contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

Eliminating Cloud Waste

image courtesy of parkmycloud.com

In the beginning, the cloud was presented and sold with cost savings as a primary benefit. Unfortunately, most businesses that have implemented a cloud solution have not realized those savings. Cloud adoption has flourished anyway because of its many other benefits: agility, business continuity, and, yes, even IT security have made the cloud an integral part of business IT infrastructure. But while businesses have been willing to forgo cost savings on their new IT infrastructure, they won’t tolerate a significant cost increase, particularly if it comes in the form of wasted resources. Today, a primary trend in cloud computing is regaining control of cloud costs and eliminating cloud waste.

Ironically, the cloud’s beginnings in virtualization were all about reducing unused resources. Virtualization software, the hypervisor, created logical server instances that could share fixed resources and provide better overall utilization. As this virtualized IT stack started to move out of the local data center (DC) and into public cloud environments, some of that enhanced utilization was lost. Cloud pricing became hard to equate with the IT infrastructure of the past: it was OpEx instead of CapEx, billed in unusual increments of pennies per hour of usage. Astute, financially minded managers began asking questions like “how many hours are in a month, and how many servers do we need?” Then the first bills arrived, and the answers to those questions, along with the public cloud’s problematic cost structure, became clearer. They quickly realized the need to track cloud resources closely to keep costs in line.

Pricing from cloud providers comes in three distinct categories: storage, compute, and the transactions the server experiences. Storage and bandwidth (part of traffic, or transactions) have become commodities and are very inexpensive. Compute is by far the most expensive resource and therefore the best one to optimize to reduce cost; most public cloud customers see compute make up about 40% of their entire invoice. As customers look to reduce their cloud costs, they should begin with the migration process itself. This is especially important in a lift-and-shift migration (see https://twoearsonemouth.net/2018/04/17/the-aws-vmware-partnership for more on lift and shift). Best practices call for the migration to be completed entirely before the infrastructure is optimized. Detailed monitoring needs to be kept on all instances for unusual spikes in activity or increased billing. Additionally, make sure all temporary instances, especially those in another availability zone, are shut down once their purpose is served.

In addition to monitoring and good documentation, the following are the most common tools for reducing compute costs and cloud waste.

1. Reserved Instances – All public cloud providers offer an option called reserved instances to reduce cost. A reserved instance is a reservation of cloud resources and capacity for either a one- or three-year term in a specific availability zone. The commitment for one or three years allows the provider to offer significant discounts, sometimes as much as 75%. The downside is that you are giving up one of the biggest advantages of the cloud: on-demand resources. However, many organizations tolerate the commitment because they were used to it in previous procurements of physical IT infrastructure.

2. Shutting Down Non-Production Applications – While the cloud was designed to eliminate waste and better utilize resources, it is not uncommon for server instances to sit idle for long periods. Although an instance is idle, the billing for it is not. To avoid paying for idle resources, organizations have looked to shut down non-production applications temporarily, for example at night or on weekends when usage is very low. Shutting down servers sends chills through operational IT engineers (Ops), as it can be a complicated process; Ops managers don’t worry about shutting applications down so much as starting them back up. It is a process that needs to be planned and watched closely, since servers often depend on other servers being available to boot up properly. Run books are often created to document the steps for shutting down and restarting server instances. Any server considered for this process should be non-production and tolerant of downtime. There are Software as a Service (SaaS) applications that help manage the process; they help determine which servers are appropriate to shut down and manage the entire workflow. Shutting down servers to avoid idle costs can be complicated, but with the right process or partner, significant savings can be realized. A back-of-the-envelope cost comparison of both tactics follows below.
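Here is a simple sketch of both tactics using purely hypothetical rates; real pricing varies by provider, region, and instance size, so treat the numbers as illustrative only.

```python
# Hypothetical rates for illustration only; real pricing varies widely.
ON_DEMAND_RATE = 0.20        # $ per hour for one VM instance
RESERVED_DISCOUNT = 0.40     # assume a 1-year reserved instance saves 40%
HOURS_PER_MONTH = 730

# Tactic 1: reserved instance for an always-on production VM.
on_demand_monthly = ON_DEMAND_RATE * HOURS_PER_MONTH
reserved_monthly = on_demand_monthly * (1 - RESERVED_DISCOUNT)

# Tactic 2: shut a non-production VM down nights and weekends
# (roughly 12 h x 5 weekdays = 60 of the 168 hours per week actually running).
running_fraction = (12 * 5) / (24 * 7)
scheduled_monthly = ON_DEMAND_RATE * HOURS_PER_MONTH * running_fraction

print(f"always-on, on demand:  ${on_demand_monthly:7.2f}/month")
print(f"always-on, reserved:   ${reserved_monthly:7.2f}/month")
print(f"nights/weekends off:   ${scheduled_monthly:7.2f}/month")
```

Even with these made-up figures, the pattern holds: idle hours are the single largest source of cloud waste, and either committing to a reservation or eliminating the idle hours attacks it directly.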

Cloud adoption has changed many aspects of enterprise IT, not least how IT is budgeted. Costs that were once fixed CapEx are now OpEx and fluid, and IT budgets need to be realigned to accommodate this change. I have seen organizations that haven’t adjusted and as a result have little flexibility in their operational budgets, yet still have a large expense account and a corporate credit card. This has allowed desperate managers in any department to use their expense account to spin up public cloud instances and bypass the budget process. These ad hoc servers, created quickly and outside the process, are the ones that become forgotten and create ongoing billing problems.

While much of cloud technology has reached widespread acceptance in business operations, cost analysis, cloud waste management, and budgeting still lag in some organizations. Don’t let this happen to your business; contact us for a tailor-made cloud solution.

 

If you would like to talk more about strategies to eliminate cloud waste for your business contact us at:        Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

Mentoring in Business

image courtesy of indianceo.in

I have always valued the act of mentoring and believed it has an important role in business, particularly in the business of technology. Recently, I was listening to a business podcast on the subject, and it inspired me to write about it. I have always used mentors and currently have three in my life; I am also a mentor myself (see https://twoearsonemouth.net/2017/10/05/i-found-giving-back-can-provide-you-more). Mentoring is not binary; I believe you can and should be both a teacher and a student in this process. I also believe that a common practice can be developed across all of IT that will benefit the industry and its customers.

I remember the impact of my first business mentor. Early in my sales career, I was insecure and concerned that I didn’t know everything about the technical product I was selling. When I told my boss and mentor how I felt, he responded with the simple affirmation, “you know much more about the technology than your prospect does.” It reassured me, and it has helped me throughout my career whenever I have had similar insecurities. I think this anecdote is relevant to mentorship: you don’t have to be an expert, just know something of what you teach. However, mentoring in business is more than teaching about technology. It needs to be a defined process that the student understands from the beginning. It should include not only instruction on technology but also information about the culture and politics of the organization where they work. Instruction on how to act and work within the bureaucracy and processes of the company is vital. This type of information can’t be taught in school, and passing it on saves the new employee hours of wasted time figuring out these details for themselves.

Many believe that a large part of today’s IT work is a trade rather than a science, not requiring a college education. IT jobs often depend on certifications (certs) that are developed and maintained primarily by the largest IT vendors, like Microsoft and Cisco. These certs are built around new and developing technologies, ignoring some of the more fundamental ones. Many newcomers to the technology workforce want careers in software development, creating applications such as those that run on their smartphones. IT infrastructure, my area of expertise and a far less glamorous technology, is still required to support those applications. Infrastructure is an example of a technology that may be best passed down through mentorship.

If a community-based mentorship program could be developed in technology, it could eliminate the challenges of having the large vendors run the IT education process. Ideally, a system could be developed like the one used for centuries: the master and the apprentice. Masters, or experts in a trade, were paid to pass their knowledge on to a younger apprentice. For this process to succeed, it needs to be started and supported by the hiring companies within IT. Established employees should be compensated for mentoring and expected to teach new employees the many aspects of their job. As these programs become more widespread, an education process for the trade of IT can be developed and maintained where it belongs: in the IT community.

Mentorship is an art that has been overlooked because of today’s requirements and expectations of a college education. I believe mentoring has tremendous benefits and will produce a better, more rounded education for new entrants to the field of IT.

 

The AWS & VMware Partnership


image courtesy of eweek.com

In the world of technology, partnerships are vital because no provider does everything well. Some partnerships appear successful at first glance, while others require more of a wait-and-see approach. When I first heard that VMware and Amazon Web Services (AWS) were forming a partnership, I wanted a better explanation of how it would work before deciding on its merits. My cynicism was founded primarily in VMware’s previous attempts to play in the public cloud market, such as the failed vCloud Air. After learning more, I’m still not convinced it will work, but the more I understand, the more sense it makes.

It can be said that VMware invented the cloud through its pioneering of virtualization technology. It allowed the enterprise of the 1990s to spend less money on IT hardware and infrastructure, taught users how to build and add to an IT infrastructure in minutes rather than weeks, and showed IT departments how to be agile. In a similar way, AWS built an enormous and rapidly growing industry from nothing. It had the foresight to take its excess IT infrastructure and sell it, or more precisely rent it, an excess that could be rented because it was built on Amazon’s own flavor of virtualization. For these two to join forces does make sense. Many businesses have built their virtualized IT infrastructure, or cloud, with the VMware hypervisor, whether on premises, in another data center, or both. With the trend for corporate IT infrastructure to migrate off-site, the business is left with a decision: take a “lift and shift” strategy to migrate data off-site, or redesign applications for a native cloud environment? Lift and shift refers to moving an application or operation from one environment to another without redesigning the application. When a business has invested in VMware and management has decided to move infrastructure off-site, a lift-and-shift strategy makes sense.

Below is a more detailed look at a couple of the advantages of this partnership and why it makes sense to work with VMware and AWS together.

Operational Benefits

With VMware Cloud on AWS, an organization that is familiar with VMware can create a simple and consistent operational strategy for its multi-cloud environment. VMware’s feature sets and tools for compute (vSphere), storage (vSAN), and networking (NSX) can all be utilized. There is no need to change VMware provisioning, storage, or lifecycle policies. This means you can easily move applications between your on-premises environment and AWS without having to purchase new hardware, rewrite applications, or modify your operations. Features like vMotion and VMware Site Recovery Manager have been optimized for AWS, allowing users to migrate and protect critical applications across all their sites.

Scalability and Global Reach

Using the vCenter web client and VMware‘s unique features like vMotion enhances AWS. AWS’s inherent benefits of nearly unlimited scale and multiple Availability Zones (AZs) fit hand in glove with VMware’s cloud management. A primary example is an East Coast enterprise opening a West Coast office. The AWS cloud allows a user to create infrastructure in a West Coast AZ on demand, in minutes, while VMware’s vCenter web client allows management of the new site and the existing primary infrastructure from a single pane of glass. This example shows not only how the enterprise can take advantage of the partnership’s benefits, but also that the partnership will appeal most to the needs of the larger enterprise.

The benefit above, like the solution as a whole, is built on the foundation of an existing VMware infrastructure. This article has touched on just a couple of the advantages of the VMware-AWS partnership; there are many. It should be noted that cost is not one of them. This shouldn’t surprise many IT professionals, as large public cloud offerings don’t typically reduce cost, and VMware has never been known as an inexpensive hypervisor. The enterprise may realize soft cost reductions by removing much of the complexity, risk, and time associated with moving to the hybrid cloud.

Both AWS and VMware are leaders in their categories and are here to stay. Whether this partnership survives or flourishes, however, only time will tell.

If you would like to learn more about a multi-cloud strategy for your business contact us at: Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

 

 

Getting Started with Microsoft Azure

 


image courtesy of Microsoft.com

A few months ago I wrote an article on getting started with Amazon Web Services (AWS); now I want to follow up with the same for Microsoft Azure. Microsoft Azure is the public cloud offering deployed through Microsoft’s global network of data centers. Azure has continued to gain market share on its chief rival, AWS. Being in second place is not something Microsoft is used to with its offerings, but in the cloud, as with internet web browsers, Microsoft got off to a slow start. Capturing market share will not prove as simple with AWS as it was with Netscape and the web browser market in the 90s, but in the last two years progress has been made. Much of that progress can be attributed to Satya Nadella, Microsoft’s current CEO, who proclaimed from the start a commitment to the cloud. Most recently, Microsoft has expressed its commitment to supporting Linux and other operating systems (OS) within Azure. Embracing other operating systems and open source projects is new for Microsoft and seems to be paying off.

Like the other large public cloud providers, Microsoft has an easy-to-use self-service portal for Azure that makes it simple to get started. In addition to the portal, Microsoft entices small and new users with a free month of service. The second version of the portal, released last year, has improved the user experience greatly. Microsoft’s library of pre-configured cloud instances is one of the best in the market: a portal user can select a preconfigured group of servers that creates a complex solution like SharePoint, including all the components required (the Windows Server, SQL Server, and SharePoint Server). What previously took hours can now be “spun up” in the cloud with a few clicks of your mouse, and there are dozens of pre-configured solutions like this SharePoint example. The greatest advantage Microsoft has over its cloud rivals is its deep and long-established channel of partners and providers. These partners, and the channel Microsoft developed for its legacy products, allow it to provide the best support of all the public cloud offerings.

Considerations for Getting Started with Microsoft Azure

Decide the type of workload

It is very important to decide not only which workloads can go to the cloud but also which applications you want to start with. Start with non-production applications that are not critical to the business.

Define your goals and budget

Think about what you want to achieve with your migration to the cloud. Cost savings? Shifting IT from a capital expense to an operational expense? Be sure to calculate a budget for your cloud instance; Azure has a great tool for cost estimation. In addition, make sure you check costs as you go. The cloud has developed a reputation for starting out with low costs that increase quickly.

Determine your user identity strategy

Most IT professionals are familiar with Microsoft Active Directory (AD), Microsoft’s application for authenticating users to the network behind the corporate firewall. AD has become somewhat dated, challenged not only by the cloud’s off-site applications but also by today’s countless mobile devices. Today, Microsoft offers Azure Active Directory (AAD), which is designed for the cloud and works across platforms. At first, you may implement a hybrid approach between AD, AAD, and Office 365 users, starting with a synchronization of the two authentication technologies. At some point, you may need to add federation, which provides additional connectivity to other applications such as commonly used SaaS applications.

Security

An authentication strategy is a start for security, but additional work will need to be done. A future article will cover cloud security best practices in more detail. While it is always best to have a security expert recommend a solution, there are some general best practices we can mention here. Use virtual machine appliances whenever possible: virtual firewall, intrusion detection, and antivirus devices add another level of security without adding hardware, and devices like these can be found in the Azure Marketplace. Use dedicated links for connectivity if possible; they incur a greater expense but eliminate threats from the open internet. Disable remote desktop and secure shell access to virtual machines. These protocols exist to offer easier access for managing virtual machines over the internet; after you disable them, use point-to-point or site-to-site Virtual Private Networks (VPNs) instead. Finally, encrypt all data at rest in virtual machines to help secure data.
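As one illustration of the remote desktop recommendation, the sketch below uses the Azure SDK for Python (the azure-identity and azure-mgmt-network packages, assumed to be installed) to add a network security group rule denying inbound RDP from the internet. The subscription, resource group, and NSG names are placeholders; a similar rule on port 22 would cover SSH.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder identifiers: substitute your own subscription, resource group, and NSG name.
SUBSCRIPTION_ID = "your-subscription-id"
RESOURCE_GROUP = "my-resource-group"
NSG_NAME = "my-vm-nsg"

network_client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Deny inbound RDP (TCP 3389) from the open internet on this network security group.
poller = network_client.security_rules.begin_create_or_update(
    RESOURCE_GROUP,
    NSG_NAME,
    "deny-rdp-from-internet",
    {
        "protocol": "Tcp",
        "direction": "Inbound",
        "access": "Deny",
        "priority": 100,
        "source_address_prefix": "Internet",
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "3389",
    },
)
print(poller.result().provisioning_state)
```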

Practically every business can find applications to migrate to a public cloud infrastructure such as Azure, though very few businesses put their entire IT infrastructure in a public cloud environment. A sound cloud strategy, and determining which applications to migrate, enables the enterprise to get the most from a public cloud vendor.

If you would like to learn more about Azure and a cloud strategy for your business contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net