Newsletter 7.0.5: New version and enhanced reporting

Welcome.

A new version has arrived! Energy Logserver is currently at version 7.0.5. We planned this update to be a smaller one, focused mainly on optimization and stabilization. It turned out, however, that we also added significant features we had been working on for some time. Below are some of the highlights.

 

Major changes

SUNBURST IOC

We’ve added SUNBURST IOC detection to Energy Logserver. SUNBURST is a serious threat that can be used to steal sensitive information from an organization and even take control of infected systems, so it is well worth monitoring its activity. Energy Logserver has been equipped with predefined alerts that can be used to detect SUNBURST. You can read more here: https://energylogserver.pl/en/sunburst-detection/

Alert through syslog

A new notification method has been added to the Alert UI: alerts can now be sent as syslog messages. This is very useful for integrating Energy Logserver with the many solutions capable of receiving data or signals through syslog. You can select the host, port, protocol, severity and facility for this notification method.

National character support

Energy Logserver now has full national character support, which makes it ready for custom data from uncommon sources. Every module of Energy Logserver can process and work with such data without limitations.

Reporting enhanced

Reports and exports have been enhanced with additional information. You can now see who created a report or export task and when. The report scheduler has been optimized and moved to the same tab as the report creator. With this update you have even more control over and information about Energy Logserver users. This is the next step in our effort to make Energy Logserver user management as useful and easy as possible.

 

We’ve also tackled one of the most confusing topics in elasticsearch: the Keyword and Text string types and the difference between them.

Feel free to read the short article here: https://energylogserver.pl/en/text-vs-keyword/

If you find that interesting, then a few words about optimization should also be something you want to read 🙂

Read more on our blog: https://energylogserver.pl/en/few-words-about-optimization/

 

Stay safe and happy searching!

Energy Logserver team

SUNBURST detection

SUNBURST is a threat that uses SolarWinds software. It gains access to the infrastructure via a fake software update. The malware was constructed so well that it went undetected for a long time. It disguises its communications very cleverly, making them look like legitimate connections. It even uses country-specific IP addresses to avoid being recognized as an anomaly.

What is the risk? First of all, this calculated attack is aimed at exfiltrating specific information: user logins and passwords, personal data, but also insight into the secrets of the organization and intellectual property such as technologies and designs. Finally, SUNBURST allows unauthorized persons to take control of the system. The threat posed by this vulnerability is very serious.

At Energy Logserver, we have collected a large amount of information about this threat, and based on it, we have prepared a set of rules that allow us to detect the presence of SUNBURST in our environment.

While SUNBURST activity is well masked, it is not undetectable. The Threat Intelligence system built into Energy Logserver, once configured, is able to easily observe traces of this vulnerability. After adding at least two monitoring objects, Agents can send key information that will help identify the threat.

Objects for monitoring can be as basic as the Windows Defender subsystem and TaskScheduler. Both Windows Defender and TaskScheduler can spot attempts specific to SUNBURST activity.

In addition, our Threat Intelligence database contains over 2,100 characteristic objects associated with SUNBURST activity or SolarWinds-related malware. These include, among others, file hashes, IP addresses and domains.

Energy Logserver can be enriched with a package of predefined alerts related to SUNBURST detection. Additionally, using the methods available in Energy Logserver, we are able to improve parsers in order to precisely detect malware activity in SolarWinds environments.

Few words about optimization

While working with elasticsearch it is impossible to avoid the topic of shard optimization. Sooner or later every user of a more complex system will have to take it up. An unoptimized elasticsearch environment can result in slow data indexing, slow query responses, and even unstable performance of the entire environment.

The sooner we understand where this problem comes from and the sooner we address it, the better. Planning elasticsearch shard policies is essential to ensure long-lasting and stable cluster performance. It is also worth remembering that each shard is in fact an instance of the Lucene engine, on the basis of which elasticsearch was created.

 

What is a shard?

An index is built from shards, which we divide into primaries and replicas. Each shard holds some part of the data stored in the index, so the set of primary shards in an index acts much like RAID 0. Additionally, each primary shard can have its own 1:1 replicas. This guarantees data availability in the event of a cluster failure: if any primary shard becomes unavailable, its replica takes its place.
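As an illustration, the number of primary shards and replicas is set when an index is created; the index name below is just an example:

curl -X PUT "localhost:9200/my-logs-000001" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "number_of_shards": 4,
    "number_of_replicas": 1
  }
}'

With these settings the index has 4 primary shards, each with one replica, so 8 shards in total.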

 

Disks

Often, the elasticsearch environments that receive data are not properly managed in terms of space. Data released into the environment remains there for a long time, somehow forgotten. Indexing large amounts of data without proper management can quickly consume even huge disk resources, especially because replicas are 1:1 copies of the primary shards.

Holding a 100 GB index with 4 primary shards, each of which has three replicas, means a total of 16 shards (4 primary + 12 replicas) and 400 GB of data on disk. It is not hard to see how inattention can quickly lead to a full disk in such a scenario.

Optimization consists in categorizing data and assigning each category an appropriate number of shards and replicas. Of course, every replica removed means greater susceptibility to permanent data loss in the event of a failure. Not every index is critical, however, and for those with lower priority it is worth considering how many replicas they really require.
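For lower-priority data, the replica count of an existing index can be lowered at any time through the index settings API; the index name below is only an example:

curl -X PUT "localhost:9200/low-priority-logs/_settings" -H 'Content-Type: application/json' -d'
{
  "index": {
    "number_of_replicas": 0
  }
}'

Note that, unlike number_of_replicas, the number_of_shards setting cannot be changed on an existing index.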

 

Memory

Elasticsearch is written in Java, so the assigned heap is crucial for the proper functioning of the environment. Incorrectly planning the server's memory resources can lead to a serious failure due to insufficient memory.

How do you judge how much memory an elasticsearch node requires? It depends on the size of the cluster and the amount of data we collect. Elasticsearch holds a lot of data in memory for quick access. It is recommended that an elasticsearch node hold no more than 25 shards (primary and replicas) per 1 GB of heap memory. It is worth noting that elasticsearch does not enforce this limit for us, so monitoring the number of shards per GB of heap is one of the administrator's fundamental tasks.
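A quick way to keep an eye on this ratio is the _cat API, which can show the heap configured on each node and the number of shards each node currently holds:

curl "localhost:9200/_cat/nodes?v&h=name,heap.max,heap.percent"
curl "localhost:9200/_cat/allocation?v&h=node,shards"

Comparing the shards column with each node's heap size lets you verify the shards-per-GB guideline mentioned above.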

 

Performance

When it comes to query optimization, it is no secret that the main factors are the structure of the query and the scope of data it runs on. In other words: a query over 100 GB of data will return faster than the same query over 500 GB.

The more complicated the query and the more data it has to scan, the longer we will wait for the answer. Therefore, it is important to balance the relationship between the number of shards and their size.

It is recommended that one shard contain between 20 GB and 40 GB of data. Therefore, if we have an index of 100 GB of data, it is worth allocating a total of 4 primary shards. Replicas are not included in this calculation, as they hold 1:1 copies of the primary shards' data.
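Shard sizes are easy to verify with the _cat API, which makes it clear whether an index stays within the recommended 20-40 GB per shard:

curl "localhost:9200/_cat/shards?v&h=index,shard,prirep,store&s=store:desc"

The prirep column distinguishes primary shards (p) from replicas (r), and store shows the size of each shard on disk.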

 

Summary

The above aspects show that although elasticsearch is a powerful tool and one of the best engines for managing large amounts of data, it still needs careful attention in terms of optimization. Good planning of the index and shard structure will let you enjoy a stable and very efficient cluster environment.

Text vs Keyword

There are two types of data in elasticsearch that are often troublesome for people inexperienced in working with the system - Keyword and Text. Both types are a kind of string, but elasticsearch interprets them differently, so you can perform different operations on them.

 

How is a field type determined?

In general, the type of a field is determined by the template. A template is an instruction for creating indexes in elasticsearch, including field mappings. If a template does not clearly specify what type a given field should be, elasticsearch will by default create a dynamic mapping with both Keyword and Text. However, working this way is not recommended, because of the disk space that can be saved by planning the types assigned to fields.
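A minimal sketch of such a template, assuming elasticsearch 7.x and using an example index pattern and example field names:

curl -X PUT "localhost:9200/_template/my-logs-template" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["my-logs-*"],
  "mappings": {
    "properties": {
      "country": { "type": "keyword" },
      "message": { "type": "text" }
    }
  }
}'

Any new index matching my-logs-* will then map country as Keyword and message as Text instead of relying on dynamic mapping.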

 

Inverted index

To understand the importance of the problem, look at the inverted index in elasticsearch[1]. It is the specific way in which the system stores data, and thanks to this solution elasticsearch is very fast at returning large numbers of documents, even over a long time range.

The inverted index is similar to the index found in some books. At the end of the book you can find a list of words with information about which pages those words appear on. Elasticsearch does exactly the same: for each word it records which documents it appears in.

Example

For example, let's look at a document with the id "my-document". It has an "address" field with the value "Wiejska 20, Warsaw".
POST "localhost:9200/my-index/_doc/my-document" -d'
{
"address" : "Wiejska 20, Warsaw"
}'

Below is how elasticsearch sees this value in the inverted index for both types. For the Keyword type, the whole value is stored as a single term: "Wiejska 20, Warsaw". For the Text type, the value is analyzed and split into separate lowercase terms: "wiejska", "20" and "warsaw".
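You can check the Text-type view yourself with the _analyze API, which shows the terms produced by the standard analyzer:

curl -X POST "localhost:9200/_analyze" -H 'Content-Type: application/json' -d'
{
  "analyzer": "standard",
  "text": "Wiejska 20, Warsaw"
}'

The response lists the three tokens "wiejska", "20" and "warsaw", while a Keyword field skips this analysis entirely and keeps the single term "Wiejska 20, Warsaw".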

Difference in query

When elasticsearch receives a query, it first checks the inverted index for a match. If it finds one, it returns the documents that match the query. Therefore, if we query elasticsearch for the value "Warsaw", a field of the Keyword type may not return the document, because the stored value is literally "Wiejska 20, Warsaw". The opposite is true for the Text type: because the field content has been analyzed, elasticsearch is able to find single words in the inverted index and return the document as an answer.
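Assuming the default dynamic mapping, where address is a Text field with an address.keyword sub-field, the difference can be seen with two queries:

curl -X GET "localhost:9200/my-index/_search" -H 'Content-Type: application/json' -d'
{
  "query": { "term": { "address.keyword": "Warsaw" } }
}'

curl -X GET "localhost:9200/my-index/_search" -H 'Content-Type: application/json' -d'
{
  "query": { "match": { "address": "Warsaw" } }
}'

The first query (term on the Keyword field) returns no hits, because the stored term is the whole value "Wiejska 20, Warsaw". The second (match on the Text field) finds the document, because the analyzed term "warsaw" is present in the inverted index.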

Of course, there are different kinds of search queries in elasticsearch, and depending on which one is used you may get different results. That, however, is a separate topic from the differences between the field types themselves.

 

Summary

The differences between the two types are significant. We generally use the Keyword type for constant values in the data that we do not want to analyze, e.g. a country name, an application name, etc. We use the Text type when we need the power of full-text search, e.g. in the message field that contains the original message from the source system.

 

[1] Strictly speaking, the inverted index is a feature of the Apache Lucene engine on which elasticsearch was built. We use this shortcut to make the topic easier to understand.

Newsletter 7.0.4 Feb update: Wiki

Welcome.

The second month of the year went by really fast here at Energy Logserver. We are full of motivation, as each day seems to bring new challenges to tackle.
We are currently working on a few game-changing modules for Energy Logserver. For now, we've finished polishing and updating existing functionalities and are happy to share more information with you. For a presentation of the most interesting features, please register for our webinar; more information below:

Webinar Energy Logserver 7.0.4 – User views and advanced permissions
25.03.2021 10:00 AM Warsaw

https://zoom.us/webinar/register/WN_UZscLOiwRbm1FzHa0_mPeQ

Changes
Custom views
Roles can now have modules assigned to their permissions, which allows you to create dedicated views. This advanced but easy-to-use feature enables use cases such as:
- Creating a dedicated view for analytical teams who don't need to see raw data.
- SOC views for analysts who need clear information and the ability to switch between data from different modules.
- Creating clear, separate views for different departments, even if the data stays centralized in one environment.

Wiki
Energy Logserver now has a built-in wiki solution. It lets you keep easy-to-read documentation and information about your infrastructure. By using the Wiki with Energy Logserver you make sure the information is easy to read and available only to those who should have access to it. This module opens up new functionalities and use cases in Energy Logserver, e.g. a CMDB, automatic documentation created from collected data, and much more.

WEC
Windows Event Forwarding (WEF) is a common topic in environments with a large Windows infrastructure. The idea is to be able to share data without installing an agent. The Windows Event Collector (WEC), however, normally has to be a Windows machine as well, which causes some maintenance and monitoring problems. We've developed a standalone solution that gathers data from Windows on a Linux system!
Energy Event Collector is designed to use native Windows technology while extending it beyond a closed environment and allowing a more flexible approach.
See more information here: www.eventcollector.com

Archive
Automatic data archiving is an important aspect of managing storage space in Energy Logserver. This module was introduced in the previous newsletter, but we believe it is worth highlighting how important archive management is. We are constantly improving and optimizing the archive module's functions.

Stay safe and happy searching!
Energy Logserver team

Vulnerability Scanner integration

Vulnerabilities are a common problem in the IT community. Some are serious, others not so much. What matters most is to know whether any vulnerabilities are present in our systems, how critical they are and how they can be fixed. There are tools that help you get that information, called vulnerability scanners.

Vulnerability scanners check whether your systems and applications are affected by any known exploits or backdoors. Similar to antivirus software, these programs have their own databases, which need to be updated regularly. Reports from such scans can, however, be long and hard to analyze.

Energy Logserver brings good news for all who want to increase visibility and security correlation across their environment. We now integrate with the leading solutions: Qualys, Tenable.sc and of course OpenVAS 🙂

A new dashboard dedicated to vulnerability scanners has also been added on top of that integration. This means we not only gather data from and integrate with vulnerability scanners, but also give you the ability to see exactly what has been detected in your infrastructure and correlate it with data from other sources.

The interactive dashboard includes a list of detected problems with recommended solutions, a list of top hosts with color-coded vulnerabilities, and much more.

If you are interested in this functionality or your version of Energy Logserver does not include it, reach out to our team at sales[at]energylogserver.pl

Newsletter 7.0.4 and Webinar: archive module

Welcome.

Happy New Year from Energy Logserver Team
We are glad to announce that Energy Logserver is currently at version 7.0.4.
This version brings some amazing changes along with a new module – Archive – which lets you manage automatic data archiving. More about it below.

First things first: if you are interested in seeing the new features in action, we would like to invite you to our upcoming webinar!

Webinar Energy Logserver 7.0.4 – Archive module
21.01.2021 10:00 AM Warsaw

https://zoom.us/webinar/register/WN_sseWJaxzRmW6m_a7hR93uA

Major changes

New module: Archive
Allows you to configure automatic archiving of selected indexes and send them to a chosen destination. More than that, with this module you can search archived data without moving it back into the system. The restoration process is also really simple – just choose the archives you'd like to restore and it's done!

Improved agent management module
Agent management not only looks better but now also gives you the option to remotely restart a chosen agent. You can also monitor and change custom configurations not related to Energy Logserver agents.

New default integration
We've added a new integration to Energy Logserver. Energy Logserver 7.0.4 now integrates with vulnerability scanners by default and brings an amazing dashboard to visualize all the data you've gathered.

Upgraded alerts
The Alerts module now supports renaming existing alerts (we know that lots of you were waiting for that ;)), and more than that, you can group alerts together, e.g. Windows-related, etc. Furthermore, we've added GUI support for notifications to Slack and OP5 Monitor. And finally, we've added two new options:
• Calendar, which lets you schedule alert notifications using the cron format
• Escalate, which escalates an alarm after a specified time.

For the full list of changes, visit: https://kb.energylogserver.pl/en/latest/CHANGELOG.html

If you are looking for interesting use cases for Energy Logserver, you will find some below; you can also visit our website:

Detecting and alerting user login events after office hour
https://energylogserver.pl/en/detecting-and-alerting-user-login-events-after-office-hour/

Detecting and alerting Abnormal Network Traffic Pattern
https://energylogserver.pl/en/detecting-and-alerting-abnormal-network-traffic-pattern/

Detecting and alerting DDoS attacks in Energy Logserver
https://energylogserver.pl/en/detecting-and-alerting-ddos-attacks-in-energy-logserver/

Stay safe and happy searching!
Energy Logserver team

Detecting and alerting user login events after office hour

This is one of the most common alerts and is easily implemented with Energy Logserver. Even better, such an alert is already predefined and shipped in the installation package by default. For Windows users we detect night logons.

We have also applied this in previous deployments for Linux users and for users of dedicated services not tied to a specific operating system.

Such a rule configuration could hardly be simpler:

More than that, we can add the calendar option to any alert, so that the alert is only triggered according to a crontab-format schedule, for example:

calendar:
  schedule: "* 0-8,16-23 * * mon-fri"

 

Detecting and alerting Abnormal Network Traffic Pattern

For monitoring traffic anomalies we use multiple approaches. Of course, we can support Energy Logserver with a dedicated network probe, which is equipped with a Netflow analysis module and detects anomalies by default. Such a probe receives netflow from a selected SPAN port and can also be used as a virtual appliance.

Other than that, we often turn to our alerting module, where we choose the proper approach.

For some customers we use the metric aggregation alert type, where we set a threshold for sent/received data.
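Under the hood this boils down to an aggregation like the sketch below, which sums transferred bytes per 5-minute bucket; the index pattern (netflow-*) and field name (netflow.bytes) are only examples and will differ per environment (assuming a recent elasticsearch 7.x):

curl -X GET "localhost:9200/netflow-*/_search" -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "per_5m": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "5m" },
      "aggs": {
        "bytes_sent": { "sum": { "field": "netflow.bytes" } }
      }
    }
  }
}'

The alert then simply compares the bytes_sent value of each bucket against the configured threshold.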

But Energy Logserver also has a set of predefined alerts, and among them is "Netflow - DNS traffic abnormal" of type Spike. This rule compares the current timeframe to the previous one and calculates the difference between them. By doing so we detect a sudden spike in a chosen pattern.

Another approach is to monitor new, previously unseen values in a selected field (like a new URL address in our logs) per user, source or other parameter.

 

Energy Logserver is capable of connecting multiple alerts into a single correlated alert, matched by field and condition, using the Chain or Logical alert types.

Detecting and alerting DDoS attacks in Energy Logserver

A DDoS attack can be detected with Energy Logserver in a few ways, which we have used in previous deployments with multiple customers. In all scenarios we want to get a notification or take a specific action based on the detection, which is why we use alerting. We can either integrate with firewall software capable of detecting such an attack, OR we can build the detection independently.

In one approach the alert type for this use case is Frequency. We look for an indicator of a connection and count it by source IP. If there are more than 100 connections from one IP in 5 minutes, the alert is triggered.
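The logic behind such a Frequency rule can be expressed as a plain elasticsearch aggregation; the index pattern (firewall-*) and the src_ip field below are only examples:

curl -X GET "localhost:9200/firewall-*/_search" -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "query": {
    "range": { "@timestamp": { "gte": "now-5m" } }
  },
  "aggs": {
    "by_source": {
      "terms": { "field": "src_ip", "min_doc_count": 100 }
    }
  }
}'

Every bucket returned by by_source represents a source IP with at least 100 connections in the last 5 minutes – exactly the condition that triggers the alert.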

We can create the same kind of alert per website, with a defined threshold of maximum visits.

 

Another option is to create both of those alerts without notifications and correlate them using the Logical alert type.