Vulnerability Scanner integration

Vulnerabilities are a common problem in the IT community. Some are serious, others not so much. What matters most is knowing whether any vulnerabilities are present in our systems, how critical they are, and how they can be fixed. There are tools that help you get that information, called vulnerability scanners.

Vulnerability scanners check your systems and applications for known exploits and backdoors. Similar to antivirus software, these programs have their own databases, which need to be updated regularly. The reports from such scans can, however, be long and hard to analyze.

Energy Logserver brings good news for everyone who wants to increase visibility and security correlation across their environment. We are now integrating with leading solutions: Qualys, Tenable.sc and of course OpenVAS 🙂

A new dashboard dedicated to vulnerability scanners has also been added on top of that integration. This means we not only gather data from and integrate with vulnerability scanners, but also give you the ability to see exactly what has been detected in your infrastructure and correlate it with data from other sources.

The interactive dashboard includes a list of detected problems with recommended solutions, a list of top hosts with color-coded detected vulnerabilities, and much more.

If you are interested in this functionality, or your version of Energy Logserver does not include it, reach out to our team at sales[at]energylogserver.pl

Newsletter 7.0.4 and Webinar: Archive module

Welcome.

Happy New Year from Energy Logserver Team
We are glad to announce that Energy Logserver is currently at version 7.0.4.
This version brings some amazing changes along with a new module – Archive – which lets you manage automatic data archiving. More about it below.

First things first: if you are interested in seeing the new features in action, we would like to invite you to our upcoming webinar!

Webinar Energy Logserver 7.0.4 – Archive module
21.01.2021 10:00 AM Warsaw

https://zoom.us/webinar/register/WN_sseWJaxzRmW6m_a7hR93uA

Major changes

New module: Archive
Allows you to configure automatic archiving of selected indices and send them to a chosen destination. On top of that, this module lets you search archived data without moving it back into the system. The restore process is also really simple – just choose the archives you’d like to restore and it’s done!

Improved agent management module
Agent management not only looks better, but now also gives you the option to remotely restart a chosen agent. You can also monitor and change custom configurations that are not related to Energy Logserver agents.

New default integration
We’ve added a new integration to Energy Logserver. Energy Logserver 7.0.4 now integrates with vulnerability scanners by default and brings an amazing dashboard to visualize all the data you’ve gathered.

Upgraded alerts
The Alerts module now supports renaming existing alerts (we know that lots of you were waiting for that ;)), and on top of that you can group alerts together, e.g. Windows-related ones. Furthermore, we’ve added GUI support for notifications to Slack and OP5 Monitor. And finally, we’ve added two new options:
• Calendar, which allows you to manage the time window of alert notifications based on a cron format
• Escalate, which escalates an alarm after a specified time.

For the full list of changes, visit: https://kb.energylogserver.pl/en/latest/CHANGELOG.html

If you are looking for interesting use cases for Energy Logserver, you will find some below; you can also visit our website:

Detecting and alerting user login events after office hours
https://energylogserver.pl/en/detecting-and-alerting-user-login-events-after-office-hour/

Detecting and alerting Abnormal Network Traffic Pattern
https://energylogserver.pl/en/detecting-and-alerting-abnormal-network-traffic-pattern/

Detecting and alerting DDoS attacks in Energy Logserver
https://energylogserver.pl/en/detecting-and-alerting-ddos-attacks-in-energy-logserver/

Stay safe and happy searching!
Energy Logserver team

Detecting and alerting user login events after office hours

This is one of the most common alerts and is easily set up with Energy Logserver. Even better – such an alert is already predefined and included in the installation package by default. For Windows users we detect night logons.

We have also applied this in previous deployments for Linux users, or for users of dedicated services that are not tied to a specific operating system.

Such a rule configuration could hardly be simpler:
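As a rough illustration only (not the actual predefined rule), a minimal sketch of such a rule in the ElastAlert-style YAML used by the alerting module could look like this; the index pattern and field name are assumptions, and 4624 is the Windows event ID for a successful logon:

# Minimal sketch – index pattern and field name are assumptions
name: Windows logon after office hours (sketch)
index: winlogbeat-*
type: any
filter:
- query:
    query_string:
      query: "event_id: 4624"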

On top of that, we can add the calendar option to any alert, so the alert will only trigger according to a crontab-format schedule, for example:

calendar:
  schedule: "* 0-8,16-23 * * mon-fri"

 

Detecting and alerting Abnormal Network Traffic Pattern

For monitoring traffic anomalies we use multiple approaches. We can, of course, support Energy Logserver with a dedicated network probe, which is equipped with a Netflow analysis module and detects anomalies out of the box. Such a probe receives netflow from a selected SPAN port and can also be deployed as a virtual appliance.

Apart from that, we often turn to our alerting module, where we choose the appropriate approach.

For some customers we use the metric aggregation alert type, where we set a threshold for sent/received data.

Energy Logserver also ships with a set of predefined alerts, among them: Netflow - DNS traffic abnormal, of type Spike. This rule compares the current timeframe with the previous one and calculates the difference between them. By doing so we detect a sudden spike of the chosen pattern.
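As an illustration of the idea (not the shipped rule itself), a spike-type rule comparing the current window with the previous one could be sketched roughly as follows; the index pattern, field names and thresholds are assumptions:

# Illustrative sketch of a spike rule – index, field names and values are assumptions
name: DNS traffic spike (sketch)
index: netflow-*
type: spike
spike_height: 3          # current window must contain at least 3x the events of the previous one
spike_type: up
threshold_cur: 500       # ignore windows with fewer than 500 events
timeframe:
  minutes: 10
filter:
- query:
    query_string:
      query: "destination_port: 53"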

Another approach is to monitor new, previously unseen values in a selected field (like a new URL address in our logs) per user, source or other parameter.
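This maps naturally onto a new_term-type rule, which fires when a value appears that was not seen during the lookback window; again, the index pattern and field name below are assumptions:

# Illustrative sketch of a new_term rule – index and field name are assumptions
name: New URL observed (sketch)
index: proxy-*
type: new_term
fields: ["url"]
terms_window_size:
  days: 7                # values seen in the last 7 days are considered already known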

 

Energy Logserver is capable of connecting multiple alerts into one correlated alert, matched by field and condition, using the Chain or Logical alert types.

Detecting and alerting DDoS attacks in Energy Logserver

A DDoS attack can be detected with Energy Logserver in a few ways, which we have done in previous deployments with multiple customers. In all scenarios we are interested in getting a notification or taking a specific action based on the detection, which is why we use alerting. We can either integrate with firewall software that is capable of detecting such an attack, or we can build the detection independently.

In one approach, the alert type for this use case is frequency. We look for an indicator of a connection and count it by source IP. If there are more than 100 connections from one IP within 5 minutes, the alert is triggered.
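A minimal sketch of such a frequency rule is shown below; the index pattern and field names are assumptions:

# Illustrative sketch of a frequency rule – index and field names are assumptions
name: DDoS - connection flood from a single IP (sketch)
index: firewall-*
type: frequency
num_events: 100          # more than 100 matching events...
timeframe:
  minutes: 5             # ...within 5 minutes
query_key: "src_ip"      # counted separately for each source IP
filter:
- query:
    query_string:
      query: "event_type: connection"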

We can create the same kind of alert per website, with a defined threshold of maximum visits.

 

Another option is to create both of those alerts without notifications and correlate them using the Logical alert type.

Webinar: Incident management in Energy Logserver – from SOC to Analytics

Welcome.

We hope that you are all staying safe and healthy in these interesting times. At Energy Logserver we are working non-stop to deliver the best quality features for you. That is why we would like to share with you what is new.

Energy Logserver is currently at version 7.0.3. In this version we focused strongly on event correlation and alerting, along with improved internal auditing. We want to highlight the most interesting aspects of this version.

 

Major changes

Improved types of alerts:

  • Chain – the possibility to chain individual rules one after another. The rule is triggered when the threshold is met and the expected data sequence occurs. Example: detection of failed logins followed by a success.
  • Logical – activates when selected alerts are triggered according to defined logic. Example: detection of at least 3 failed logins OR a root login AND 2 service configuration changes.

Agent module:

With Energy Logserver 7.0.3, expect a new look for the Agents module, responsible for central management. We’ve improved reliability, added better control over agent state and more – all available from the user interface.

Skimmer:

Familiar with Skimmer, our internal monitoring process? The new version provides more cluster health-check metrics, such as:

  • Indexing rate – shows EPS (events per second) in the system
  • Expected data nodes – Energy Logserver measures its performance and calculates how many data nodes it requires for an optimal workflow.
  • Heap usage – shows assigned memory usage for every component group in the Energy Logserver infrastructure
  • Disk space – monitors disk space usage, so you can see how much space is left for your data and never run into trouble.
  • and more...

If you are not familiar with Skimmer, then you definitely should check it out!

https://kb.energylogserver.pl/en/latest/21-00-00-Monitoring/21-00-00-Monitoring.html#skimmer

 

Our community keeps giving us new, challenging questions, which we are addressing. We are happy to work alongside those who share a love for monitoring software. Solving issues together is very satisfying. Here are some highlights:

How to deal with oversized Kafka documents in Logstash?

https://energylogserver.pl/en/how-to-deal-with-oversized-kafka-documents-in-logstash/

How to remove duplicated or unimportant messages from syslog?

https://energylogserver.pl/en/how-to-remove-duplicated-or-not-important-messages-from-syslog/

Why is the processing time of the Logstash DNS filter so slow?

https://energylogserver.pl/en/dns-logstash-filter-is-slow/

 

Webinars coming soon:

Incident management in Energy Logserver - from SOC to Analytics

10.12.2020 starting 11 AM CET

Click here to register: https://zoom.us/webinar/register/WN_r8Qzg_vPRd-Vhk5pT4L9kQ

Description:

During this webinar we will look at how to search data for errors and anomalies. We will create incidents and look at how to work with Energy Logserver from two perspectives - operational and analytical with dashboards.

 

Stay safe and happy searching!

Energy Logserver team

How to remove duplicated or unimportant messages from syslog?

Issue description

We all know this entry in syslog:
... last message repeated ... times

Can it somehow be easily filtered out?

Issue solution

Yes, it can. There are many ways to do so, and below is just one example:


filter {
  if [source] == "/var/log/messages" {
    # drop the "last message repeated N times" entries
    if [message] =~ /last message repeated [0-9]+ times/ {
      drop {}
    }
  }
}

DNS logstash filter is slow

Issue description

 

I've used the DNS filter in Logstash, but I can clearly see that the indexing speed has decreased after adding resolve.
Does it have to be so slow?

Logstash config from documentation:

filter {
  dns {
    reverse => [ "source_host", "field_with_address" ]
    resolve => [ "field_with_fqdn" ]
    action => "replace"
  }
}

Issue solution

 

In older versions of the Logstash DNS filter (circa 2018), there was a bug that prevented parallel cache lookups when the hit_cache_size / failed_cache_size directives were used.

A very nice analysis with performance graphs was carried out by GitHub user robcowart:
https://github.com/logstash-plugins/logstash-filter-dns/pull/42

A ready-to-use config is shown below – please note that full performance is reached once the cache has filled with data.
It is also worth using fast DNS servers, e.g. 1.1.1.1/1.0.0.1.

filter {
  # dns resolve
  dns {
    reverse => [ "hostname" ]            # field containing the IP address to reverse-resolve
    action => "replace"
    nameserver => ["1.1.1.1", "1.0.0.1"]
    hit_cache_size => 131072             # cache successful lookups
    hit_cache_ttl => 900                 # keep successful lookups for 15 minutes
    failed_cache_size => 131072          # cache failed lookups as well
    failed_cache_ttl => 900
  }

  # filter performance
  metrics {
    meter => "events"
    add_tag => "metric"
  }
}

output {
  if "metric" in [tags] {
    stdout {
      codec => line {
        format => "DNS filter rate: %{[events][rate_1m]}"
      }
    }
  }
}

How to deal with oversized Kafka documents in Logstash?

Issue description

 

Kafka does not accept documents because the documents are too large.

Increasing the limits does not help, because I have reached the level of 10 MB and some Logstash events are still not being sent to Kafka.

After some time this results in the Logstash queue being full, which in turn leads to the suspension of the entire pipeline...

What is the best way to solve the above problem?

Logs
[2020-09-03T00:53:38,603][WARN ][logstash.outputs.kafka ] KafkaProducer.send() failed: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1223210 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration. {:exception=>java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1223210 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.}

[2020-09-03T00:53:38,644][INFO ][logstash.outputs.kafka ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>1, :failures=>1, :sleep=>0.1}

[2020-09-03T00:53:38,769][WARN ][logstash.outputs.kafka ] KafkaProducer.send() failed: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1223210 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration. {:exception=>java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1223210 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.}

[2020-09-03T00:53:38,770][INFO ][logstash.outputs.kafka ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>1, :failures=>1, :sleep=>0.1}

[2020-09-03T00:53:38,878][INFO ][logstash.outputs.kafka ] Exhausted user-configured retry count when sending to Kafka. Dropping these events. {:max_retries=>1, :drop_count=>1}

[2020-09-03T02:15:12,763][WARN ][logstash.outputs.kafka ] KafkaProducer.send() failed: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1216262 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration. {:exception=>java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1216262 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.}

[2020-09-03T02:15:12,764][INFO ][logstash.outputs.kafka ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>1, :failures=>1, :sleep=>0.1}

[2020-09-03T02:15:12,871][WARN ][logstash.outputs.kafka ] KafkaProducer.send() failed: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1216262 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration. {:exception=>java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1216262 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.}

[2020-09-03T02:15:12,871][INFO ][logstash.outputs.kafka ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>1, :failures=>1, :sleep=>0.1}

  

Issue solution

 

Depending on your needs, we offer 3 solutions that should work:

1. Tagging documents with more than 10,000 characters in the message field.
Such documents can be directed to, for example, a file using file {} in the output section, and then reviewed and parsed accordingly, so that the message field is already split into the appropriate fields. In this case, large documents bypass the Kafka output and the Logstash pipeline will not fill up.

filter {
  ruby {
    # tag events whose message field exceeds 10,000 characters
    code => "
      if event.get('message').to_s.length > 10000
        event.tag('TLTR')
      end
    "
  }
}
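To actually route the tagged documents away from Kafka, the output section could be shaped roughly like this; the file path, broker address and topic name are placeholders:

output {
  if "TLTR" in [tags] {
    # oversized documents go to a local file for later review and parsing
    file {
      path => "/var/log/logstash/oversized-%{+YYYY.MM.dd}.log"
    }
  } else {
    kafka {
      bootstrap_servers => "localhost:9092"
      topic_id => "logs"
    }
  }
}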

2. Truncate + Tagging.
The document will be truncated after the specified number of bytes and tagged, so that it is known which messages were truncated.
In this case, large documents will be truncated and correctly received on the Kafka side, and the Logstash pipeline will not fill up.

filter {
  truncate {
    fields => ["message"]
    length_bytes => 49999999
    add_tag => "TLTR"
  }
}

3. Drop.
Useful when we know that the "large documents" contain irrelevant information and we can afford to lose them. In this case, a document rejected by Kafka is returned to the queue for only 1 retry and then abandoned, without clogging the Logstash pipeline.
In the output section we must add:

retries => 1
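In context, the Kafka output with this setting could look roughly like the sketch below; the broker address and topic name are placeholders, and max_request_size is mentioned only as the related producer-side size limit:

output {
  kafka {
    bootstrap_servers => "localhost:9092"
    topic_id => "logs"
    retries => 1                    # a rejected document is retried once, then dropped
    # max_request_size => 10485760  # optional: maximum size of a single request, in bytes
  }
}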

Future Tech Event with our partner – CyberX

We are proud to announce the Future Tech Event conference in Oman, whose platinum sponsor is our partner from the MENA region - CyberX.

Future Tech Event is a conference presenting the latest ICT products and services, the latest devices, consumer electronics and the most modern intelligent technology across all sectors - including cybersecurity.

At this event, we will have the opportunity to listen to presentations by the founder of CyberX, Mohannad Alkalash, and our engineer - Szymon Ćwieka.

 

To sign up for the event and listen to the lectures, please click here: https://www.futuretechevent.com