How machine learning in data centers optimizes operations


Machine learning and artificial intelligence are popular topics for today’s IT professionals, but in the case of your organization’s data centers, they hold real promise.

Machine learning software can spot and predict problems faster than you or your team might notice them, and often resolve them more quickly. These systems are a logical extension of today's hybrid data center environment and are a growing part of data center infrastructure.

IDC predicts that 50% of IT assets in data centers will run autonomously using embedded AI functionality by 2022. Machine learning in data centers can optimize much of your overall operations, including planning and design, workloads, uptime and cost management.

Use cases for machine learning in data centers

Machine learning is capable of learning from scenarios and data sets, and it can formulate immediate reactions, instead of requiring human intervention or relying on a limited set of pre-programmed actions. The technology can help you learn more about your data center’s systems, manage them more efficiently and prevent unexpected downtime.

Creating more efficient data centers. Companies can use machine learning to autonomously manage a data center's physical environment: rather than merely issuing software alerts, the system makes real-time adjustments to the physical facility and data center architecture.

Google uses its AI systems to autonomously manage cooling at its data centers and continuously analyze 21 variables such as air temperature, power loads and internal air pressure. In 2018, the company decreased the energy required for cooling by 40% with the use of machine learning and reached a power usage effectiveness score of 1.06.  
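The underlying idea can be illustrated with a toy model. Google's actual system uses deep neural networks over roughly 21 variables; the sketch below substitutes a simple linear least-squares fit on invented telemetry to show the general pattern: learn how cooling energy responds to observed conditions, then use the model to predict load under a forecast.

```python
# Toy sketch of ML-assisted cooling: fit a model of cooling energy from
# historical telemetry, then predict tomorrow's load. All numbers are
# synthetic and illustrative; a production system would use far more
# variables and a nonlinear model.
import numpy as np

# Historical telemetry: [outside_temp_C, fan_setpoint_pct] -> cooling kW
X = np.array([[20, 40], [25, 40], [30, 60], [35, 80], [30, 80], [25, 60]], dtype=float)
y = np.array([100.0, 115.0, 140.0, 165.0, 150.0, 125.0])

# Fit cooling_kW ~ w0 + w1*temp + w2*setpoint by least squares.
A = np.column_stack([np.ones(len(X)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predicted_kw(temp, setpoint):
    """Predicted cooling power for given conditions and fan setpoint."""
    return w[0] + w[1] * temp + w[2] * setpoint

# Predict cooling load for a 32 C forecast at a 70% fan setpoint, so
# operators (or a controller) can compare candidate setpoints in advance.
forecast_kw = predicted_kw(32, 70)
```

In a real deployment the learned model would be evaluated across many candidate control actions, subject to thermal safety limits, which is closer to what DeepMind's system does.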

Reducing risk in operations. Preventing downtime is a crucial activity for data center operations, and machine learning can help you predict and prevent it more easily. Data center machine learning software monitors real-time performance data from critical equipment — such as power management and cooling systems — and predicts when the hardware might fail. This allows you to perform preventative maintenance on these systems and prevent costly outages.

Machine learning-based risk analysis improves data center uptime by modeling different configurations that increase resiliency, identifying opportunities for preventative maintenance and flagging potential cybersecurity risks before they manifest.
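As a rough illustration of predicting hardware trouble from real-time performance data, the sketch below flags sensor readings that drift sharply from a rolling baseline. The telemetry, window size, threshold and z-score approach are illustrative assumptions, not any vendor's actual method.

```python
# Minimal predictive-maintenance sketch: flag readings more than
# `threshold` standard deviations away from a rolling baseline.
from statistics import mean, stdev

def flag_anomalies(readings, window=10, threshold=3.0):
    """Return indices of readings that deviate sharply from the mean
    of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Simulated fan-speed telemetry: steady around 2000 RPM, then a spike
# that might precede a cooling-unit failure.
telemetry = [2000, 2003, 1998, 2001, 1999, 2002, 2000, 1997, 2001, 2000,
             2002, 1999, 2450, 2001]
print(flag_anomalies(telemetry))  # the spike at index 12 is flagged
```

Flagged readings would then trigger a preventative-maintenance ticket rather than waiting for the component to fail outright.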

Decreasing customer churn with smart data. Companies may use machine learning in data centers to understand their customers better and potentially predict consumer behavior. As an extension of customer success programs, machine learning can analyze the mountains of information in data centers that go unused after collection.

By connecting machine learning software with a customer relationship management (CRM) system, an AI-powered data center could search for and retrieve data stored in historical databases not traditionally used for CRM, and enable the CRM system to formulate different strategies for lead generation or customer success.

Software options to start with

Because machine learning can act faster than any human, it can analyze terabytes of historical data and apply parameters to its decisions in fractions of a second, which is useful when you're tracking all activity in a data center. If you're looking to implement machine learning into your data center, here are a few use cases and software offerings to start with.

Power and energy management. Energy management is one of the easiest areas for organizations to use machine learning in data centers and immediately see significant gains. Google’s use of DeepMind has consistently provided energy savings of around 30%, reducing associated costs. 

Maya HTT’s data center infrastructure management software, Datacenter Clarity LC, uses AI-powered tools to analyze individual servers to detect anomalies and opportunities for optimization.

For example, it can discover less efficient servers and reroute their workloads to more energy- and work-efficient servers that have lower utilization rates; all you see is a notice recommending replacement of the older server, which you can upgrade before it becomes an issue.
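The rerouting idea can be sketched as a simple greedy pass that drains work from low-efficiency servers onto efficient machines with spare capacity. The server names, efficiency scores and heuristic below are hypothetical illustrations, not how Datacenter Clarity LC actually works.

```python
# Greedy workload-rerouting sketch: move load off inefficient servers
# onto efficient, lightly loaded ones.

def reroute(servers, efficiency_floor=0.7, capacity=1.0):
    """servers: dict name -> {'eff': work-per-watt score (0..1),
    'load': utilization (0..1)}. Mutates loads in place and returns a
    list of (source, destination, amount_moved) tuples."""
    moves = []
    inefficient = [n for n, s in servers.items() if s['eff'] < efficiency_floor]
    efficient = sorted((n for n, s in servers.items() if s['eff'] >= efficiency_floor),
                       key=lambda n: servers[n]['load'])  # least loaded first
    for src in inefficient:
        for dst in efficient:
            spare = capacity - servers[dst]['load']
            if spare <= 0 or servers[src]['load'] <= 0:
                continue
            amount = min(spare, servers[src]['load'])
            servers[src]['load'] -= amount
            servers[dst]['load'] += amount
            moves.append((src, dst, round(amount, 2)))
    return moves

fleet = {
    'old-01': {'eff': 0.55, 'load': 0.60},   # candidate for retirement
    'new-01': {'eff': 0.90, 'load': 0.30},
    'new-02': {'eff': 0.85, 'load': 0.80},
}
print(reroute(fleet))  # old-01's load is drained onto new-01
```

Once a server's load is fully drained this way, it can be taken offline and replaced without disrupting workloads.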

Log management. Most data center systems generate logs as they work, but they’re not useful to you if they just sit collecting dust once they’re generated. Add in any edge or peripheral devices your organization also uses, and that’s a lot of logs to sift through.

Machine learning can centralize and analyze the logs and create easy-to-use reports that are valuable to your team. Open source technology such as Elasticsearch and paid options from Splunk can help ingest and analyze any data gathered by machine learning routines.
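A common first step in automated log analysis is collapsing each line into a template by masking variable fields, so that rare patterns stand out from routine chatter. The sketch below shows that idea with invented log lines; a real pipeline would feed such templates into a tool like Elasticsearch or Splunk.

```python
# Log-template sketch: mask variable fields (IPs, numbers), count
# template frequencies, and surface rare one-off patterns.
import re
from collections import Counter

def template(line):
    line = re.sub(r'\d+(\.\d+){3}', '<IP>', line)   # mask IPv4 addresses
    line = re.sub(r'\d+', '<NUM>', line)            # mask remaining numbers
    return line

logs = [
    "disk 3 temperature 41C",
    "disk 7 temperature 39C",
    "disk 2 temperature 44C",
    "unreachable host 10.0.0.12",
]
counts = Counter(template(l) for l in logs)
rare = [t for t, c in counts.items() if c == 1]
print(rare)  # the one-off 'unreachable host <IP>' pattern surfaces
```

Routine temperature readings collapse into one high-frequency template, while the single unreachable-host event remains visible for follow-up.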

Root cause analysis. When you have a performance issue, you must be able to quickly identify the root cause and fix it. Hewlett Packard Enterprise's AI predictive engine in its InfoSight product provides tools to help identify and solve problems in near real time across your organization's on-premises data center and cloud setups.

Based on specific parameters, InfoSight identifies affected users and develops its own set of solutions. But the real value is in its preventative measures: once the software develops rules to address a problem, it applies them across the entire system, rerouting traffic to unaffected systems to stop them from inheriting the same issue.


