What Modern Technology Trends Will Impact Your Infrastructure in 2019?

Things move quickly in the business world, particularly when it comes to technology. Strategically adopting emerging solutions can help an organization differentiate itself within the marketplace and provide the best experience possible, both internally for its staff members and within customer-facing systems.

However, as new solutions continue to roll out and gain buzz in the enterprise and tech industries, it’s important that decision-makers are aware of the top trends, as well as how these might impact their infrastructure. Without the right expertise or infrastructure support in place, a trending tech initiative can fail internally or fall short of the ROI business leaders were hoping for. Weighing the possible benefits against the potential infrastructure impacts of tech trends is imperative before implementing anything new.

Let’s take a look at some of the top trending technology solutions and approaches that will make waves in 2019, and how you can best prepare your infrastructure to support these types of emerging systems.

Connected clouds: ‘Multicloud’

The cloud has been part of enterprise infrastructure for years now, but Forbes and CMO Network contributor Daniel Newman noted that next year, the lines between different cloud services will keep blurring. What has been known as hybrid cloud architecture will develop even further, connecting public and private cloud environments into a multicloud configuration.

“Basically, what’s happening is that companies are realizing that going all public cloud, private cloud, or data center isn’t the best option,” Newman wrote. “Sometimes they need a mix of all or both. Thus, connected clouds are continuing to develop to meet companies’ changing needs – whether they want to cloud-source storage, networking, security, or app deployment.”

Chances are good that businesses across nearly every industry sector will continue to move increasingly critical workloads to cloud environments. This will require strong partnerships between companies and their cloud providers, as well as deep expertise on the vendors’ part, to ensure that rising cloud investments deliver real value for the organization. In addition, it’s always important to have proper visibility into capacity management and data protection, particularly within cloud environments.

Data abounds: Analytics, artificial intelligence and machine learning

In recent years, data has become a commodity and enterprises have been quick to capitalize on and analyze the sources they have on-hand. This trend will further develop next year as initiatives that center around data analysis continue to be a priority. This includes artificial intelligence and machine learning, particularly as additional use cases emerge.

In many cases, even organizations that are already participating in analytics and exploring AI and ML are only on the cusp of realizing the true value and advantages that data can provide, particularly for decision-making and overall strategy. There is considerable room for maturity within these initiatives, which we’ll continue to see in 2019 and beyond.

However, as with any type of data-driven initiative, storing and processing large volumes of information requires considerable infrastructure support, which must be coupled with the right capacity planning and performance management. IT teams must ensure that they have the necessary resources to enable these types of demanding activities.
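As a concrete illustration of the capacity-planning side of this, the sketch below projects how many days of headroom remain in a storage pool based on its recent growth. It is a minimal example: the CSV file name and column layout are assumptions, not tied to any particular monitoring product.

```python
import csv

def days_until_full(rows, window=30):
    """Project days until 100% full from average daily growth over `window` samples."""
    recent = rows[-window:]
    growth_per_day = (recent[-1]["used_tb"] - recent[0]["used_tb"]) / max(len(recent) - 1, 1)
    if growth_per_day <= 0:
        return None  # flat or shrinking usage: no exhaustion projected
    headroom = recent[-1]["capacity_tb"] - recent[-1]["used_tb"]
    return headroom / growth_per_day

# Hypothetical export: one row per day with "used_tb" and "capacity_tb" columns
with open("storage_history.csv", newline="") as f:
    rows = [{"used_tb": float(r["used_tb"]), "capacity_tb": float(r["capacity_tb"])}
            for r in csv.DictReader(f)]

eta = days_until_full(rows)
print(f"Projected days until full: {eta:.0f}" if eta else "No growth trend detected")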

What’s more, having the right data protection in place is imperative, especially as more regulatory standards require increased safeguarding of individuals’ sensitive details.

Privacy: A top priority

As more workloads are moved to the cloud and businesses place a high priority on data and analysis, security and privacy must go hand in hand with those efforts. As IT Toolbox contributor Neil Miller pointed out, 2018 became the year of data privacy, with the European Union’s General Data Protection Regulation going into effect in May. Concerns surrounding data privacy won’t slow down heading into 2019, and ensuring that the right safeguards and associated policies are in place will only become more important, and potentially more difficult.

In this way, it can be a great benefit for organizations to partner with an expert technology service provider that can support needs for cloud computing and consulting, as well as data protection and storage management. In addition, pairing this partnership with an infrastructure performance solution that can enable in-depth visibility across the entire infrastructure can be especially beneficial. To find out more, connect with The ATS Group today.

What can a year’s worth of IT data tell you?

A lot can happen in a year, especially when it comes to a business’s technology. Just consider your own internal infrastructure: Chances are good that the previous 12 months have seen at least one major initiative, be it the deployment of a new application or the migration of certain workloads to the cloud.

What’s more, as technology continues to advance and companies increasingly look to leverage emerging solutions, their internal IT landscape will continue to change and adjust. According to a recent survey reported by ZDNet, experts are forecasting modest increases in business IT spending through next year.

Now consider that you have the power to look back at all the upgrades, usage patterns and other trends that took place within your infrastructure over the past year: What kinds of insights could this information offer?

Peak usage periods

One of the first things many IT teams look for in their historical data is readily identifiable peak usage periods. While this kind of insight is often considered low-hanging fruit, periods of top usage can reveal a lot about overall IT health.

For instance, although many companies expect usage peaks toward the end of the year, especially during the holiday season, administrators may find that usage also increases in the early spring. With this knowledge, IT stakeholders can ensure that adequate support is in place during peak periods to prevent bottlenecks, slow performance and other issues.
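As a simple sketch of what this analysis can look like, the snippet below ranks months by average utilization from a year of hourly samples. The input file and column names are hypothetical and not tied to any specific monitoring tool.

```python
import csv
from collections import defaultdict
from datetime import datetime

# Group a year of hourly samples by month (assumed CSV: "timestamp", "cpu_pct")
by_month = defaultdict(list)
with open("utilization_history.csv", newline="") as f:
    for row in csv.DictReader(f):
        ts = datetime.fromisoformat(row["timestamp"])
        by_month[ts.strftime("%Y-%m")].append(float(row["cpu_pct"]))

# Rank months by average utilization to reveal seasonal peaks
ranked = sorted(by_month.items(), key=lambda kv: sum(kv[1]) / len(kv[1]), reverse=True)
for month, samples in ranked[:3]:
    print(f"{month}: avg {sum(samples)/len(samples):.1f}% across {len(samples)} samples")
```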

What could a historical look at your infrastructure tell you?

How capacity was used

Stakeholders can also examine historical infrastructure data to get a sense of how their overall capacity was utilized in the past year, including the initiatives to which capacity was devoted and how those resources were put to work. This can provide cues for future initiatives, as IT admins can look back to see how much capacity was required for a similar project in the past, and better ensure the right resources are in place for upcoming initiatives.
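A minimal sketch of this kind of attribution, assuming servers have been tagged by initiative and that per-server capacity usage can be exported to CSV (both file formats here are hypothetical):

```python
import csv
from collections import defaultdict

# server -> initiative tag, e.g. exported from a tag manager (assumed format)
with open("server_tags.csv", newline="") as f:
    tags = {r["server"]: r["initiative"] for r in csv.DictReader(f)}

# Sum per-server usage into per-initiative totals (assumed format)
usage_by_initiative = defaultdict(float)
with open("capacity_report.csv", newline="") as f:
    for r in csv.DictReader(f):
        usage_by_initiative[tags.get(r["server"], "untagged")] += float(r["used_tb"])

# Largest consumers first: shows which initiatives drove capacity demand
for initiative, used in sorted(usage_by_initiative.items(), key=lambda kv: -kv[1]):
    print(f"{initiative}: {used:.1f} TB")
```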

The success – or snags – of migrations

It’s clear that the cloud isn’t going anywhere. In fact, organizations will only continue to consolidate workloads within internal data centers and migrate additional items to cloud environments. As enterprises increasingly work toward a multi-cloud strategy, being able to view, in depth, the steps that were taken during past migrations can be invaluable in informing upcoming ones.

Supporting future decision-making with historical infrastructure data

Being able to view, analyze and leverage a year’s worth of historical data, driven by an industry-leading infrastructure management solution like Galileo Performance Explorer, is critical in the current IT landscape.

“IT administrators can achieve a holistic view of their entire infrastructure, and use … analytics to make decisions related to compute, storage and network resources to avoid problems before they result in downtime or slower performance,” Dell EMC noted. “They can also use predictive analytics to understand resource utilization and when it may be time to upgrade portions of the infrastructure.”
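To make the predictive idea in that quote concrete, here is a minimal sketch that fits a linear trend to daily utilization and estimates when it will cross an upgrade threshold. The data is synthetic and the threshold is a policy choice; nothing here is prescribed by Galileo or Dell EMC.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit returning (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Synthetic stand-in for a year of daily utilization readings (%)
days = list(range(365))
util = [40 + 0.05 * d for d in days]

slope, intercept = linear_fit(days, util)
THRESHOLD = 80.0  # upgrade trigger: a policy choice, not a product default

if slope > 0:
    crossing_day = (THRESHOLD - intercept) / slope
    print(f"Utilization projected to reach {THRESHOLD}% around day {crossing_day:.0f}")
else:
    print("No upward trend detected")
```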

To find out more about what you can glean from 12 months of historical infrastructure data – or even from a 30-day snapshot – connect with the experts at Galileo Performance Explorer today.

ATS Group Partners with Cobalt Iron for Innovative, Analytics-Based Data Protection and Precise Service Delivery

The ATS Group, an industry leader in traditional and next-gen infrastructure IT solutions, announced today an integrated, collaborative data protection service offering in close partnership with Cobalt Iron, utilizing Cobalt Iron’s industry-leading Adaptive Data Protection (ADP) solution for enterprise backup.

Organizations today face a multitude of challenges related to backing up and retrieving data, and environmental complexity has outgrown legacy infrastructure backup solutions. The data protection experts at the ATS Group have the knowledge and experience to simplify operations and optimize the value of your IT investments. Whether the need is compliance support, data disruption, analytics, or workloads in the cloud, ATS delivers the optimized, adaptive data protection solutions necessary for modern corporations.

The ATS Group and Cobalt Iron both specialize in cloud, storage, and monitoring, and both have strong ties to IBM. Together, the two organizations offer the best of both worlds: innovative data protection with impeccable service delivery. This united solution brings diverse industry experience and a commitment to accelerating the use of analytics and cloud resources to save money, simplify operations, and increase value.

Cobalt Iron ADP modernizes backup, delivering the features and scale of enterprise data protection with the flexibility and economics of cloud consumption. ADP eliminates complexity, reduces management overhead, scales easily from terabytes to exabytes, and provides a simplicity not found in today’s backup technologies and tools.

Key Features

Data protection services from the ATS Group are seamlessly delivered through cloud technology. Customers benefit from key data protection as a service (DPaaS) features, including scalability enabled through expertly managed cloud environments that support even the most expansive collections of data assets, as well as flexible delivery methods. This ensures that services and support can grow alongside an organization’s expanding needs and informational requirements.

ATS’ data protection solution provides organizations with streamlined data migration from multiple, disparate backup and protection platforms, creating a unified, modern protection service supported by advanced cloud technologies. This includes key analytics, virtualization, encryption, and multi-cloud support features to reduce complications and limitations for data management. The ATS Group’s data protection solution sets a new standard for assertive information protection and backup.

The ATS Advantage

Data protection services from the ATS Group enable several key advantages over other services:

  • Migration services: ADP puts an end to the long, painful migration processes of the past. By leveraging analytics and automation, enterprises can seamlessly migrate from the past to the future.
  • Implementation services: Based upon proven best practices, ATS delivers modern data protection in a fraction of the time required to deploy legacy products.
  • Integration services: ATS’ RESTful API library and integration services were designed to ensure existing automation and management systems are respected and leveraged. ATS tightly integrates with existing, defined management layers including ServiceNow, Active Directory, and Remedy (see the integration sketch after this list).
  • Cost efficiency: Thanks to its flexibility, scalability, and seamless service delivery, the DPaaS solution represents a cost-effective way to ensure an organization’s most essential data assets are properly safeguarded. As a service offering, it lets businesses maintain existing infrastructure assets without having to invest in additional hardware or software, eliminating unplanned capex.
  • Advanced analytics: Data-driven benchmarks and indicators power key capabilities including workload automation, monitoring with associated alerting, and self-healing. Best of all, users can gain insight into these processes through a policy-driven dashboard.
  • Exceptional management and delivery: Our data protection services enable organizations to put a team of experts in charge. IT managers can rest easy knowing these critical functions are being carried out with the utmost care from a vastly experienced team.
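To illustrate the integration pattern described above (this is not ATS’ actual API, which isn’t documented here), the sketch below raises an incident in ServiceNow via its public Table API when a backup job fails. The instance name, credentials, and field values are placeholders.

```python
import requests

SN_INSTANCE = "example"  # placeholder ServiceNow instance name
url = f"https://{SN_INSTANCE}.service-now.com/api/now/table/incident"

# Illustrative ticket payload for a failed backup job
payload = {
    "short_description": "Backup job failed for server db-prod-01",
    "urgency": "2",
    "category": "Data Protection",
}
resp = requests.post(
    url,
    json=payload,
    auth=("integration_user", "integration_password"),  # placeholder credentials
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
print("Created incident:", resp.json()["result"]["number"])
```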

About the ATS Group

As new tech emerges offering business advantages, enterprises need support and expertise that will enable them to reap the benefits. Based near Philadelphia, the ATS Group offers agile services aligned with modern IT innovations, providing a critical competitive edge. For almost 20 years, our consultants have worked together to provide independent and objective technical advice, creative infrastructure consulting and managed support services for organizations of all sizes. Our specialists help clients store, protect, and manage their data, while optimizing performance and efficiency. The ATS Group specializes in server and storage system integration, containerized workloads, high performance computing (HPC), software defined infrastructure, DevOps, data protection and storage management, cloud consulting, infrastructure performance management and real-time monitoring for cloud, on-premises and hybrid solutions. The ATS Group supports solutions from today’s top IT vendors including IBM, VMware, Oracle, AWS, Microsoft, Cisco, Lenovo, Pure Storage and Red Hat. www.theatsgroup.com

Follow the ATS Group on LinkedIn and Twitter.

About Cobalt Iron

Cobalt Iron is the global leader in SaaS-based enterprise data protection. The company was founded in 2013 to fundamentally change the way the world thinks about data protection. Through analytics and automation, Cobalt Iron enables enterprises to transform and optimize legacy backup solutions into a simple cloud-based architecture. By leveraging the cloud, Cobalt Iron reduces overall capex by more than 50 percent while eliminating backup failures and inefficiencies. Processing more than 7 million jobs a month for customers in 44 countries, Cobalt Iron delivers modern data protection for enterprise customers.

Follow Cobalt Iron on LinkedIn and Twitter.

6 Steps to a Successful, Data-Driven Data Center Migration

Data center migrations have been taking place more frequently over the past few years. Given the demands for reduced on-premises infrastructure, lower operating costs and greater reliance on the cloud, this infrastructure exodus makes perfect sense.

That said, just because it’s an increasingly common occurrence doesn’t mean it’s a straightforward proposition. In fact, the upheaval that comes from handing off critical assets during a migration creates potential weak spots that can leave a business vulnerable.

Data center migrations: Success comes with a plan

As ServerCentral pointed out, over 60 percent of companies have delayed their migrations due to concerns about downtime and a lack of necessary resources and expertise. In addition, of those that delayed, 20 percent noted that they did so because they did not have a plan in place to support the process.

Thankfully, a successful migration doesn’t have to be as complex or challenging as IT admins and business leaders might expect. With the right planning and best practices, organizations can better navigate their data center migration, and ensure they have the insights about their existing and new infrastructure environments to avoid downtime and support top-notch performance of key IT assets.

Planning and the proper performance insight can support a successful data center migration.

6 steps to successfully navigating a data center migration

Let’s take a look at the critical steps to include in your data center migration plan, and how infrastructure performance monitoring enables a more streamlined migration process:

  1. Establish the scope of the migration: The first key step in any migration is identifying the scope of the project, including the workloads and applications that will be part of the transfer. While some initiatives will encompass the entirety of the applications and infrastructure supported by the data center, others may only bring in a few key apps and data sets.
  2. Monitor the data center infrastructure: This is where infrastructure performance monitoring and visibility come into play. Admins should monitor the infrastructure elements identified in the migration scope to get a full picture of usage patterns, key trends and demand spikes that must be accounted for within the new infrastructure. Galileo Performance Explorer enables users to take a 30-day snapshot of their infrastructure, or to leverage historical data for as long as Galileo has been in place.
  3. Group assets to be migrated: Building upon the initially determined scope, stakeholders should look to virtually group together all the assets that will be migrated from the data center. Galileo enables users to utilize a Tag Manager function to easily identify and group servers and storage systems included in the migration.
  4. Determine baseline support and bandwidth requirements: Using an innovative infrastructure performance monitoring suite like Galileo, stakeholders should analyze metrics including IOPS, latency and throughput to understand the support and bandwidth requirements assets will have. This helps prevent downtime and other performance issues as items are moved from the data center to the new infrastructure environment (see the sizing sketch after this list).
  5. Leverage metrics to right-size the new environment: Based on trending and usage spikes noted during baseline analysis, stakeholders can ensure that the new infrastructure will have the necessary resources and support to enable performance.
  6. Monitor during and after migration: A solution like Galileo allows users to monitor critical assets during and after the migration to support the most in-depth visibility and a streamlined migration process.
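A minimal sketch of steps 4 and 5: derive baseline and peak IOPS from exported monitoring samples, then add headroom to size the target environment. The CSV export format is an assumption for the example; Galileo’s actual export formats may differ.

```python
import csv
import statistics

def percentile(values, pct):
    """Nearest-rank percentile; avoids a numpy dependency for a sketch."""
    ordered = sorted(values)
    idx = int(round((pct / 100) * (len(ordered) - 1)))
    return ordered[idx]

# Hypothetical export of per-interval samples for the tagged migration group
iops = []
with open("migration_group_metrics.csv", newline="") as f:
    for row in csv.DictReader(f):
        iops.append(float(row["iops"]))

baseline = statistics.median(iops)
peak = percentile(iops, 95)   # p95 captures spikes without chasing outliers
HEADROOM = 1.25               # 25% growth buffer: a policy choice, not a rule

print(f"Baseline IOPS:  {baseline:.0f}")
print(f"p95 IOPS:       {peak:.0f}")
print(f"Provision for:  {peak * HEADROOM:.0f} IOPS in the target environment")
```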

Migrations will continue to be an essential initiative for enterprises, but they must be supported with the right planning and infrastructure performance insights. To find out more, connect with the experts at the ATS Group today.

Understanding the Zabbix Roadmap & Upcoming 2018/2019 Releases

The ATS Group has extensive experience and expertise in Zabbix implementation services in the United States, pioneering enterprise-level, real-time monitoring for organizations of all sizes. Our partnership with Zabbix enables us to execute projects of any scale, from solution consultation and integration to implementation and support.

At the request of partners and customers, a development roadmap was recently published to keep the Zabbix community in the know about what’s to come. As of the date of this post (September 13, 2018), that page contains information about Zabbix 4.0 and 4.2.

Zabbix 4.0 – ETA: September 2018

Accessibility (Enterprise)
  • High contrast themes for Zabbix Web UI
  • Friendly UI for visually impaired people
  • Zabbix UI ready for assistive technologies
More flexible permissions (Enterprise)
  • Tag based permissions and alerting
  • Different problem view for different user groups
Advanced autoregistration (Enterprise)
  • Change role of already registered host
Dashboard and UI (Enterprise)
  • Kiosk mode for Dashboard and other pages
  • Easy to use time selector
  • Condensed view for problems
  • New graph widget with flexible selection of items
Interoperability (Enterprise)
  • Elastic as a backend database
  • Real-time export of collected data: history and events
Data collection (Enterprise)
  • Native support of data collection over HTTP/HTTPS
  • Monitoring of HTTP based APIs (see the API example after this list)
  • Integration with HTTP based agents like Prometheus
Performance (Enterprise)
  • 20% performance improvements of Zabbix Server and Proxy
  • Faster processing and displaying of problems
Distributed monitoring (Enterprise)
  • Compression for server-proxy communications
  • Much faster data transfer
  • Lower requirements for network bandwidth
  • IP based validation for all proxy communications
Ease of use (User experience)
  • Manually execute item check
  • Mark mandatory fields in UI
  • Get rid of Monitoring->Triggers view
  • New filtering options for various views
Extreme flexibility (User experience)
  • Support of inventory macros in event tags
  • Control behaviour of item units
  • Macros for item preprocessing
Advanced work flow for problems (Enterprise)
  • Change problem severity manually
  • Optional message and operations when updating problem
  • Search problem by problem name
Tag based maintenance (Enterprise)
  • Problem tag level maintenance
  • Suppressing of problems by maintenance
Minor enhancements (General)
  • Improve database down message
  • New agent checks
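To give a flavor of the new HTTP data collection, the example below uses the Zabbix JSON-RPC API to create an HTTP agent item (item type 19, introduced in 4.0) that polls an HTTP-based API directly from the server. The server URL, host ID, and credentials are placeholders.

```python
import requests

API = "http://zabbix.example.com/api_jsonrpc.php"  # placeholder server URL

def call(method, params, auth=None):
    """Minimal JSON-RPC 2.0 helper for the Zabbix API."""
    resp = requests.post(API, json={
        "jsonrpc": "2.0", "method": method, "params": params,
        "auth": auth, "id": 1,
    }, timeout=30)
    resp.raise_for_status()
    return resp.json()["result"]

# Log in first; Zabbix 4.0 takes "user"/"password" params (placeholder creds)
token = call("user.login", {"user": "Admin", "password": "zabbix"})

# Create an HTTP agent item so the server polls the endpoint on a schedule
item = call("item.create", {
    "hostid": "10084",                        # placeholder host ID
    "name": "Orders API health",
    "key_": "orders.api.health",
    "type": 19,                               # 19 = HTTP agent, new in Zabbix 4.0
    "url": "https://api.example.com/health",  # the HTTP endpoint to poll
    "value_type": 4,                          # store the raw response as text
    "delay": "1m",                            # poll once per minute
}, auth=token)
print("Created item IDs:", item["itemids"])
```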

Zabbix 4.2 – ETA: March 2019

Infrastructure (General)
  • Migration to Git
  • Community and partners to review proposals
  • Official builds for new platforms
Better workflow (Enterprise)
  • Guidelines for template creators
  • Versioning for templates
Data collection (Enterprise)
  • Validation and throttling rules for preprocessing
  • Discarding values and setting custom errors in preprocessing
  • Embedded language for extending preprocessing
  • Preprocessing for LLD rules including support of JSON and CSV
  • Out of the box support of Prometheus agents (see the exporter sketch after this list)
Extreme flexibility (Enterprise)
  • New syntax for trigger expressions
  • Host and template level tags
  • Plugin system for dashboard widgets
  • Timeout on item level
Ease of use (User Experience)
  • Test actions and media types from UI
  • Test item preprocessing from UI
  • Merge calculated and aggregated items
  • Replace screens with dashboards
  • Multiple filters for Monitoring->Problems
Plugins (Enterprise)
  • Webhooks for actions
  • Webhooks on problem generation
  • Ability to extend context and top level menus
Discovery (Enterprise)
  • Use received data for hostname
Event processing (Enterprise)
  • Host and proxy level dependencies
Security and integrity (Enterprise)
  • API audit records for all operations
Scalability, Redundancy, HA and DM (Enterprise)
  • List of use cases
  • Research on possible solutions
Minor improvements (General)
  • Various improvements
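Since Zabbix 4.2 is slated to collect from Prometheus-format endpoints out of the box, any service exposing metrics like the minimal exporter below becomes scrapable. This sketch uses the real prometheus_client Python library; the metric name and port are illustrative.

```python
import random
import time

from prometheus_client import Gauge, start_http_server

# A single illustrative metric; name and meaning are made up for the sketch
queue_depth = Gauge("app_queue_depth", "Current depth of the work queue")

if __name__ == "__main__":
    start_http_server(8000)  # serves Prometheus text format at /metrics
    while True:
        queue_depth.set(random.randint(0, 50))  # stand-in for a real reading
        time.sleep(5)
```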

Zabbix 4.4 has an ETA of September 2019, and Zabbix 5.0 has an ETA of March 2020. No additional details are available on those releases at this time.

Ready to take a look?

Real-time monitoring and alerting services from the ATS Group and Zabbix can provide the necessary visibility to inform organizations of the condition of key infrastructure assets, as well as the steps required to keep them in optimal health. To find out more about how the infrastructure experts at the ATS Group can support your organization’s needs for a real-time monitoring solution, connect with us today.