Elastic Stack 7.6 delivers automated threat analysis and response
Elastic Stack 7.6 streamlines automated threat detection with the launch of a new SIEM detection engine and a curated set of detection rules aligned to the MITRE ATT&CK knowledge base. The release also brings performance improvements to Elasticsearch, makes supervised machine learning more turnkey with inference-on-ingest features, and deepens cloud observability and security with new data integrations.
Elasticsearch gets faster
Elastic has improved the performance of queries that are sorted by date or other long values by applying the block-max WAND optimization to sorted queries, a clever way to skip hits that clearly cannot change the top results. This is the same block-max WAND optimization that made top-k hits queries faster in 7.0.
Sorting on time is one of the most common tasks in observability and security use cases. Chasing down an error in the Elastic Logs app or investigating a threat in Discover are just two of the many workflows that get faster simply by upgrading to 7.6.
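To make this concrete, here is a minimal sketch of the kind of request that benefits from the optimization: a date-sorted search issued with the official Elasticsearch Python client. The index name and field names are hypothetical placeholders, not anything prescribed by the release.

```python
# Minimal sketch: a date-sorted query of the kind the sort optimization in 7.6
# speeds up. Index and field names ("logs-app", "message", "@timestamp") are
# hypothetical; adjust them to your own data.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

response = es.search(
    index="logs-app",
    body={
        "size": 100,
        "query": {"match": {"message": "error"}},
        # Sorting on a date (long) field is where block-max WAND applies:
        # Elasticsearch can skip blocks of documents that cannot make it
        # into the top 100 most recent hits.
        "sort": [{"@timestamp": {"order": "desc"}}],
    },
)

for hit in response["hits"]["hits"]:
    print(hit["sort"], hit["_source"].get("message"))
```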
Supervised machine learning is now a native part of the Elastic Stack
Elastic’s goal with machine learning in the Elastic Stack has always been to make it so easy that anyone in an organization can use it. Since machine learning first shipped in 5.4, Elastic has made detecting anomalies as easy as building a visualization in Kibana, making anomaly detection accessible to a broader audience and making data science teams even more efficient.
With 7.6, Elastic brings end-to-end supervised machine learning capabilities to the stack, from training a model to using the model for inference at ingest time. The goal is to make supervised machine learning methods like classification and regression in Elasticsearch even more turnkey for practitioners across observability, security, and enterprise search use cases. For instance, a security analyst can now build a bot detection model using classification and then use the new inference ingest processor to infer and label new traffic as a bot (or not a bot) at ingest time — all natively within Elasticsearch.
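To illustrate the ingest-time half of that workflow, here is a minimal sketch of an ingest pipeline that applies a trained classification model through the inference processor. It assumes a model has already been trained in Elasticsearch; the model ID, pipeline ID, index, and field names are hypothetical.

```python
# Minimal sketch: wiring an assumed pre-trained classification model into an
# ingest pipeline via the inference processor. "bot-detection-model",
# "label-bot-traffic", and "web-traffic" are hypothetical placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

es.ingest.put_pipeline(
    id="label-bot-traffic",
    body={
        "description": "Label incoming traffic as bot / not bot at ingest time",
        "processors": [
            {
                "inference": {
                    "model_id": "bot-detection-model",          # assumed trained model
                    "target_field": "ml.inference",             # where the prediction is written
                    "inference_config": {"classification": {}},
                }
            }
        ],
    },
)

# Documents indexed through the pipeline get a predicted class attached.
es.index(
    index="web-traffic",
    pipeline="label-bot-traffic",
    body={"user_agent": "curl/7.64.1", "requests_per_minute": 420},
)
```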
As with unsupervised learning and anomaly detection, the goal here is to make supervised machine learning easy and accessible to everyone. So, instead of building a generic data science toolkit or providing integrations with external machine learning libraries, which would require users to cobble together and maintain complex workflows that move data across multiple tools, Elastic has focused on simplifying common use cases. With this approach, Elastic is unlocking new use cases while keeping the operational side of things simple.
Elastic is also including a language identification model that can be used in the inference ingest processor to label the language of documents at ingest time. Language identification is key to many use cases. For example, a support center can use it to route an incoming question to the right support agent or support location based on language, and you can use it to make sure incoming text is indexed properly in Elasticsearch.
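As a sketch of how that could look, the pipeline below uses the inference processor with the bundled language identification model, assuming it is available under the model ID lang_ident_model_1; the pipeline and index names are illustrative.

```python
# Minimal sketch: tagging documents with a predicted language at ingest time,
# assuming the bundled language identification model is exposed under the ID
# "lang_ident_model_1". Pipeline and index names are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

es.ingest.put_pipeline(
    id="detect-language",
    body={
        "description": "Attach a predicted language to each document",
        "processors": [
            {
                "inference": {
                    "model_id": "lang_ident_model_1",
                    "target_field": "ml.lang_ident",
                    "inference_config": {"classification": {"num_top_classes": 1}},
                }
            }
        ],
    },
)

# The bundled model expects its input text in a field named "text"; if your
# documents use a different field name, add the processor's field mapping option.
es.index(
    index="support-questions",
    pipeline="detect-language",
    body={"text": "¿Dónde puedo restablecer mi contraseña?"},
)
```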
“As the team responsible for the Wi-Fi subway network on public transit systems in New York City and Toronto, we are acutely aware of the need to detect system issues and connectivity anomalies. This ensures we can provide quality connections for millions of daily commuters. In 2017, we turned to anomaly detection powered by unsupervised machine learning from Elastic to detect issues that may have been otherwise missed in real time, minimizing impact on network performance,” said Jeremy Foran, Technology Specialist at BAI Communications. “As we look to the future and the onboarding of more transit systems across the world, we will continue to leverage the supervised machine learning features in Elastic Stack 7.6 to bring new networks online.”