reaction
A daemon that scans program outputs for repeated patterns, and takes action.
A common usage is to scan ssh and webserver logs, and to ban hosts that cause multiple authentication errors.
Welcome to reaction's Wiki!
This wiki is rendered at https://reaction.ppom.me.
reaction's source code is at https://framagit.org/ppom/reaction.
For an introduction to reaction, see:
- Its README
- The tutorial, in French and in English.
- The example configuration file, fully commented, in YAML or in JSONnet.
This wiki is made of:
- reference: WIP configuration reference, explaining all concepts and configuration options.
- security: good practices to avoid giving arbitrary execution to attackers. A must read!
- FAQ: FAQ, help and good practices (including JSONnet).
- streams: good practices about stream sources
- filters: discover existing service configurations
- actions: discover existing actions
- configurations: discover real-world user configurations
- articles: Other places on the web discussing reaction
❤️ Please enhance this wiki with your own discoveries! ❤️
Tutorials
Official tutorials of reaction:
Reference
Configuration
reaction can be configured using the following formats: YAML, JSON, or JSONnet.
See FAQ "What is JSONnet and why should I use it over YAML" for a rationale.
It can be configured with either:
- one configuration file,
- or a directory containing multiple configuration files:
- files are loaded and merged in alphanumeric order,
- files beginning with . or _ are ignored,
- files not ending with a supported extension are ignored,
- no recursion in subdirectories.
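For example, a hypothetical /etc/reaction/ directory could look like this (the file names are purely illustrative):

/etc/reaction/
├── 00-patterns.jsonnet    # loaded first
├── 10-ssh.jsonnet         # loaded second, merged with the previous file
├── _lib.jsonnet           # ignored (starts with _), but can still be imported manually
├── .backup.yml            # ignored (starts with .)
└── notes.txt              # ignored (unsupported extension)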
Here's a graph of reaction's model. Each concept is explained below, along with all its configuration options.
Stream
A Stream is an ASCII or UTF-8 text stream, typically logs.
cmd
An array of strings. The first string is the command, and the subsequent strings are the arguments.
reaction will execute the command to fetch the text stream.
The command's stdout and stderr are read and treated equally by reaction.
See FAQ "Why start, stop, stream and action commands are arrays" for an explanation.
See Streams for details on how to write correct Stream commands for reaction.
{
streams: {
ssh: {
cmd: ['journalctl', '-f', '-n0', ...],
},
nginx: {
cmd: ['tail', '-F', '-n0', ...],
},
},
}
streams:
ssh:
cmd: ['journalctl', '-f', '-n0', ...]
web:
cmd: ['tail', '-F', '-n0', ...]
filters
We can attach one or more Filters to a Stream.
{
streams: {
ssh: {
cmd: ['journalctl', '-f', '-n0', ...],
filters: {
myfilter: {
...
},
myotherfilter: {
...
},
},
},
nginx: {
cmd: ['tail', '-F', '-n0', ...],
filters: {
...
},
},
},
}
streams:
ssh:
cmd: ['journalctl', '-f', '-n0', ...]
filters:
myfilter:
...
myotherfilter:
...
web:
cmd: ['tail', '-F', '-n0', ...]
filters:
...
Pattern
A pattern is essentially a regex.
It's included in Filters' regex to capture a specific part of the line, for example an IP or a username.
It's referenced in Filters and Actions by its name enclosed in < and >.
regex
The regex pattern.
{
patterns: {
name: {
regex: '[A-Z][a-z]*',
},
},
}
patterns:
name:
regex: '[A-Z][a-z]*'
ignore
A list of values to ignore.
{
patterns: {
name: {
regex: '[A-Z][a-z]*',
ignore: [
'Alice',
'Bob',
],
},
},
}
patterns:
name:
regex: '[A-Z][a-z]*'
ignore:
- 'Alice'
- 'Bob'
ignoreregex
A list of regexes to ignore.
Each regex must match the full Match.
{
patterns: {
name: {
regex: '[A-Z][a-z]*',
ignoreregex: [
# Ignore names starting with Chr
'Chr.*',
],
},
},
}
patterns:
name:
regex: '[A-Z][a-z]*'
ignoreregex:
# Ignore names starting with Chr
- 'Chr.*'
type
Available since v2.2.0.
reaction ships with 3 special pattern types, each with a regex provided by reaction:
- ip, which matches both IPv4 and IPv6,
- ipv4, which matches IPv4,
- ipv6, which matches IPv6.
The default, implicit type is regex, for non-IP regexes.
{
patterns: {
ip: {
type: 'ip',
},
},
}
patterns:
ip:
type: 'ip'
{
patterns: {
ip4: {
type: 'ipv4',
},
},
}
patterns:
ip4:
type: 'ipv4'
ignorecidr
Available since v2.2.0.
Only for patterns of an IP type.
A list of IP networks to ignore, with CIDR notation.
{
patterns: {
ip: {
type: 'ip',
ignorecidr: [
'192.168.1.0/24',
'2001:db8:2345:3456::/64',
],
},
},
}
patterns:
ip:
type: 'ip'
ignorecidr:
- '192.168.1.0/24'
- '2001:db8:2345:3456::/64'
ipv4mask and ipv6mask
Available since v2.2.0.
Only for patterns of an IP type.
Group IP Matches by network.
IPv6 addresses are very cheap: malicious actors typically control 2^64 addresses (a /64 network). This is common even on residential connections.
{
patterns: {
ip: {
type: 'ip',
ipv6mask: 64,
},
},
}
patterns:
ip:
type: 'ip'
ipv6mask: 64
With this configuration, those IPv6s will be grouped:
2001:db8:2345:3456::1
2001:db8:2345:3456::2
2001:db8:2345:3456::3
And the corresponding action will be run with the network mask:
2001:db8:2345:3456::/64
This is also possible for IPv4. Be careful doing this! Some actors may only have 1, 2 or 4 IPs from a range, so grouping IPv4 addresses may be a bad idea.
{
patterns: {
ip: {
type: 'ip',
ipv4mask: 30, // 24 ...
ipv6mask: 64,
},
},
}
patterns:
ip:
type: 'ip'
ipv4mask: 30 # 24 ...
ipv6mask: 64
ipv4mask only makes sense with patterns of type ip or ipv4.
ipv6mask only makes sense with patterns of type ip or ipv6.
Match
A Match is one concrete instance of a Pattern found in the logs.
For example, if we have a Pattern user with regex [a-z]+, and a Filter with regex ^hello <user>$, the line hello charlie will result in the Match charlie.
A Match can be reused in Actions.
So an Action with the command ['echo', '<user>'] would run echo charlie for the previous Match.
Trigger
A Trigger is a Match that will make the related Filter execute its Actions.
How many Matches must happen before the Filter triggers depends on its retry and retryperiod options.
Filter
Filters run Actions when they Match regexes on a Stream.
Filters are reaction's main component, enclosing most of its runtime logic.
A Filter is attached to a Stream and receives its text stream as an input.
It applies one or more regexes to each line. When there are one or more Matches in a given period, the Filter is Triggered and executes one or more Actions.
regex
A list of regexes to try matching on the input Stream.
Whenever one of them matches, a Match is created.
streams:
nginx:
...
filters:
unauthorized:
regex:
- '^<ip> .* HTTP/\d\.\d" 401'
streams: {
nginx: {
...
filters: {
unauthorized: {
regex: [
'^<ip> .* HTTP/\d\.\d" 401',
],
},
},
},
}
Patterns can be referenced in the regexes by providing their name enclosed in < and >.
The Match will then contain the actual value matched by the Pattern and can be reused in Actions.
All regexes of a Filter must specify the same set of Patterns.
patterns:
ip:
type: 'ip'
streams:
nginx:
...
filters:
unauthorized:
regex:
- '^<ip> .* HTTP/\d\.\d" 401'
patterns: {
ip: {
type: 'ip',
},
},
streams: {
nginx: {
...
filters: {
unauthorized: {
regex: [
'^<ip> .* HTTP/\d\.\d" 401',
],
},
},
},
}
retry
How many Matches must happen before the Filter Triggers its Actions.
If not specified, defaults to 1. If specified, must be > 1.
Must be specified along with retryperiod.
retryperiod
The retention period for Matches.
Must be specified along with retry.
Format is defined as follows:
<number> <unit>
- whitespace between the integer and unit is optional
- number must be a positive integer (>= 0, no floating point)
- unit can be one of:
  - ms / millis / millisecond / milliseconds
  - s / sec / secs / second / seconds
  - m / min / mins / minute / minutes
  - h / hour / hours
  - d / day / days
Examples:
{
streams: {
stream1: {
filters: {
filter0: {
regex: [ ... ],
// no retry/retryperiod, trigger as soon as there is a match
},
filter1: {
regex: [ ... ],
// 2 matches in 10 seconds to trigger
retry: 2,
retryperiod: '10 secs',
},
filter2: {
regex: [ ... ],
// 4 matches in 30 minutes to trigger
retry: 4,
retryperiod: '30m',
},
filter3: {
regex: [ ... ],
// 3 matches in 1 day to trigger
retry: 3,
retryperiod: '1day',
},
},
},
},
}
duplicate
Available since v2.2.0.
Specify what reaction must do when the Filter matches an already Triggered Match.
3 behaviors are possible: extend, rerun and ignore.
Before v2.2.0, reaction's filters would execute the same actions multiple times.
For example, when an IP address was spamming the server, the latency between the
moment a request was received and the moment the ban action was executed by
reaction (and taken into account by the firewall) meant that more log lines for
this IP address could appear, triggering the filter again.
An IP could therefore be banned multiple times, causing bugs on firewalls that
deduplicate IPs, like ipset and nftables.
The new behavior defaults to extending the trigger period.
Let's take this filter as an example:
{
regex: [ @'Failed password for .* from <ip>' ],
actions: {
ban: {
cmd: ['iptables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
},
unban: {
cmd: ['iptables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
after: '2h',
},
},
}
If a filter is triggered at 0:00, the ban action runs, and the unban action is scheduled for 2:00.
If for some reason, some new matches appear at 0:10:
- reaction defaults to extending the unban. It won't schedule a new ban command; it will instead reschedule the existing unban action to 2:10. This default behavior can be explicitly specified with: duplicate: 'extend'
- reaction can launch a new ban command and schedule a new unban command at 2:10. This (old default) behavior is enabled with: duplicate: 'rerun'
- reaction can also simply ignore new matches for this IP until the unban command runs: duplicate: 'ignore'
actions
We can attach one or more Actions to a Filter.
{
streams: {
...
filters: {
failedlogin: {
regex: [ ... ],
actions: {
...
},
},
connectionreset: {
regex: [ ... ],
actions: {
...
},
},
},
},
}
streams:
...
filters:
failedlogin:
regex:
...
actions:
...
connectionreset:
regex:
...
actions:
...
Action
Actions are commands that a Filter runs when it is Triggered.
Actions are the reaction!
cmd
An array of strings. The first string is the command, and the subsequent strings are the arguments.
See FAQ "Why start, stop, stream and action commands are arrays" for an explanation.
{
action1: {
cmd: [ '/root/myscript.sh' ],
},
}
Patterns specified in the Action's Filter can be referenced in the command by providing their name enclosed in < and >.
They will be substituted by the actual Match value.
{
action2: {
cmd: [ 'iptables', '-I', ..., '<ip>' ],
},
}
after
By default, actions are executed as soon as the Filter is Triggered.
An action can be delayed with after.
Format is defined as follows:
<number> <unit>
- whitespace between the integer and unit is optional
- number must be a positive integer (>= 0, no floating point)
- unit can be one of:
  - ms / millis / millisecond / milliseconds
  - s / sec / secs / second / seconds
  - m / min / mins / minute / minutes
  - h / hour / hours
  - d / day / days
{
action1: {
cmd: [ ... ],
// no after, action is run immediately
},
action2: {
cmd: [ ... ],
after: '10 secs',
},
action3: {
cmd: [ ... ],
after: '30m',
},
action4: {
cmd: [ ... ],
after: '1day',
},
}
A delayed action that is scheduled and that will run later is called a pending action.
onexit
Whether to run this action on exit if it is still pending (scheduled by a Filter for later).
Defaults to false.
For firewalls, it's much more performant to flush the whole chain or IP set on exit than to remove each IP one by one.
Only makes sense when after is set.
{
action1: {
cmd: [ ... ],
},
action2: {
cmd: [ ... ],
after: '2h',
onexit: true,
},
}
action1:
cmd: ...
action2:
cmd: ...
after: '2h'
onexit: true
oneshot
Available since v2.1.0.
Whether this action should run only once and not be rerun when reaction restarts.
Useful for alerting actions, such as sending a mail: when reaction restarts, you surely want an IP to be banned again, but not the associated alert to be sent again.
By default (oneshot not set), actions are rerun when reaction restarts.
{
ban: {
cmd: [ ... ],
},
unban: {
cmd: [ ... ],
after: '12h',
},
mail: {
cmd: [ ... ],
oneshot: true,
},
}
ban:
cmd: ...
unban:
cmd: ...
after: '12h'
mail:
cmd: ...
oneshot: true
ipv4only and ipv6only
Available since v2.2.0.
Only makes sense when the underlying Filter's regex has a Pattern of type ip.
- ipv4only: only execute this action when the type: ip Pattern matches an IPv4 address.
- ipv6only: only execute this action when the type: ip Pattern matches an IPv6 address.
Both options are mutually exclusive.
{
ban: {
cmd: [ 'iptables', ... ],
ipv4only: true,
},
ban6: {
cmd: [ 'ip6tables', ... ],
ipv6only: true,
},
}
ban:
cmd: [ 'iptables', ... ]
ipv4only: true
ban6:
cmd: [ 'ip6tables', ... ]
ipv6only: true
Persistence
TL;DR: when a Filter has an Action with after set, that Filter acts as a jail, which is persisted across restarts.
When a filter is triggered, there are 2 possibilities:
- If none of its Actions have an after directive set: no action will be replayed when reaction restarts.
- If at least one Action has an after directive set: if reaction stops while after Actions are pending, and starts again while those actions would still be pending, reaction:
  - executes the past actions (actions without after, or with then+after <= now),
  - schedules the execution of future actions (actions with then+after > now).
Top-level options
start and stop
An array of arrays of strings.
So it's an array of commands, just like a Stream's cmd or an Action's cmd.
start specifies a list of commands that will be run on startup, after initialization and before streams are started.
If any of these commands exits with a non-zero exit code, reaction will exit with an error.
start can be useful to prepare state for actions (initializing the firewall, etc.).
stop specifies a list of commands that will be run on shutdown, after all pending Actions (with onexit: true) are run.
stop can be useful to clean up state set by start and by Actions.
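For example, reusing the iptables chain setup described later on the Actions pages:

{
  start: [
    // create the reaction chain and hook it into INPUT
    ['ip46tables', '-w', '-N', 'reaction'],
    ['ip46tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction'],
  ],
  stop: [
    // unhook, empty and delete the chain
    ['ip46tables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction'],
    ['ip46tables', '-w', '-F', 'reaction'],
    ['ip46tables', '-w', '-X', 'reaction'],
  ],
}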
state_directory
Where reaction's internal state is stored.
Defaults to . (the working directory, i.e. the directory from which reaction is run).
state_directory: /var/lib/reaction
{
state_directory: "/var/lib/reaction",
}
Currently, only a reaction.db file is stored, but this will change in the future.
Releases before v2.0.0 used other files, which can be safely removed.
concurrency
Integer that limits the maximum number of parallel actions.
- If set to a positive number, this will be the maximum number of parallel actions.
- If not specified or set to 0, it defaults to the number of CPUs on the system.
- If set to a negative number, there will be no limit on the number of parallel actions.
concurrency: 8
{
concurrency: 8
}
⚠️ Important security notice
Be careful when writing regexes. Try to ensure no malicious input can be captured by your regexes and injected into your actions. It's better if your actions are direct commands rather than inline shell scripts.
If you use products of regexes in shell scripts, double-check that all user input is correctly escaped. Make use of tools like shellcheck to analyze your code.
Avoid this kind of command, which mixes code and user input:
['sh', '-c', 'mycommand <pattern>']
Example of a configuration permitting remote execution:
insecure.jsonnet
{
patterns: {
user: {
regex: @'.*',
},
},
streams: {
myservice: {
cmd: ['tail', '-f', '/tmp/reaction-example'],
filters: {
myfilter: {
regex: [
@'Connection of <user> failed',
],
// [...]
actions: {
myaction: {
cmd: ['sh', '-c', 'echo "<user>"'],
},
},
},
},
},
},
}
Let's launch reaction in one terminal:
$ touch /tmp/reaction-example
$ reaction start -c insecure.jsonnet
Then let's append malicious data to the tailed file in another terminal:
$ echo 'Connection of "; mkdir malicious-directory" failed' >> /tmp/reaction-example
We simulated an attacker supplying "; mkdir malicious-directory" as a username in a random service which doesn't check for non-alphanumeric characters in its usernames.
reaction will then launch the command:
myaction: {
cmd: ['sh', '-c', 'echo "<user>"'],
},
Substituting <user> with the malicious input:
['sh', '-c', 'echo ""; mkdir malicious-directory""']
Of course, here it's a mkdir, but it could be anything.
IP pattern
Starting with reaction v2.2.0, patterns of type ip can be specified. The regex is provided by reaction, and matches are guaranteed to contain only these characters: 0123456789.:
Those characters have no special meaning in sh, so using sh -c ... with an <ip> substitution should be safe.
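For example, the following should stay safe because an ip Match can only contain digits, dots and colons (passing <ip> as a separate argument remains the preferred style):

{
  cmd: ['sh', '-c', 'ip46tables -w -A reaction -s <ip> -j DROP'],
}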
Example
Let's take a common example: Appending a line to a file.
Problematic command
actions: {
problematic: {
cmd: [ 'sh', '-c', 'echo "<name>" >> /tmp/file' ],
},
}
actions:
problematic:
cmd: [ 'sh', '-c', 'echo "<name>" >> /tmp/file' ]
Bash solution
actions: {
bash: {
cmd: [ 'sh', '/append.sh', '<name>' ],
},
}
actions:
bash:
cmd: [ 'sh', '/append.sh', '<name>' ]
/append.sh:
echo "$1" >> /tmp/file
Python inline solution
Python supports inline scripts separated from user input.
actions: {
python: {
cmd: [
'python',
'-c',
'import sys; open("/tmp/file", "a+").write(sys.argv[1] + "\n")',
'<name>'
],
},
}
actions:
python:
cmd:
- 'python'
- '-c'
- 'import sys; open("/tmp/file", "a+").write(sys.argv[1] + "\n")'
- '<name>'
Summary
Regexes are powerful and need to be carefully written. Avoid overly permissive regexes when capturing user input.
When executing scripts in actions, code and user input must be clearly separated.
When dealing with user input, save scripts in files and call them with ['bash', '/path/to/script', 'arg1', '...'] instead of inlining them with ['bash', '-c', 'command arg1 && command arg2'].
FAQ
- What is JSONnet? Why should I use it over YAML?
- Why are start, stop, stream and action commands arrays?
- How do I write defaults at one place and reuse them elsewhere?
- How do I add multiple actions defined by JSONnet functions on the same filter?
- How do I separate my configuration in multiple files?
- How do I use environment variables in actions?
What is JSONnet? Why should I use it over YAML?
JSONnet already has a good tutorial to start with.
It lets you define functions, variables, etc., and thus avoid duplication.
Other configuration languages exist, like Nix, Dhall, etc. You should feel at home if you have already used one of those.
JSONnet semantics also borrow from JavaScript and Python.
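As a small illustration of why this matters for reaction, a single JSONnet function can factor out actions that a YAML configuration would have to copy-paste into every filter (banFor here stands for a firewall action like the ones defined on the Actions pages, and the snippet is only a sketch, not a complete configuration):

local banFor(time) = { /* ban/unban actions, see the Actions pages */ };
{
  streams: {
    ssh: {
      filters: {
        failedlogin: { regex: ['...'], actions: banFor('48h') },
      },
    },
    nginx: {
      filters: {
        unauthorized: { regex: ['...'], actions: banFor('1h') },
      },
    },
  },
}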
Why are start, stop, stream and action commands arrays?
This is not always common knowledge, but this is how UNIX actually works.
Shells, like Bash, will perform a few actions on your command line before creating a new process with the execve syscall (man 2 execve).
Those actions include:
- expanding globs, most commonly *,
- splitting parameters on whitespace (but not within single or double quotes),
- ...
Here's a list of Bash commands and which processes are actually created:
| command line | execve syscall | stdout |
|---|---|---|
| echo hello world | ['echo', 'hello', 'world'] | prints hello world |
| echo "hello world" | ['echo', 'hello world'] | prints hello world |
| echo /b* | ['echo', '/bin', '/boot'] | prints /bin /boot |
What is important to understand here is that quotes, globs, etc., are not seen by echo. echo is executed only after the shell has performed those substitutions.
So commands written in reaction will be executed as is. The only substitution performed by reaction is to replace a Pattern by its corresponding Match in Actions.
See Security for reasons why you should be careful when executing actions with shell scripts. TL;DR: ['sh', '-c', 'inline script'] can lead to code injection by attackers.
How do I write defaults at one place and reuse them elsewhere?
While we don't recommend having defaults and prefer explicit configuration, it is possible with JSONnet.
A set of defaults must first be defined.
Here's an example for default options for a filter:
local filter_default = {
retry: 3,
retryperiod: '3h',
actions: banFor('24h'),
};
Then it can be used in filters:
{
streams: {
ssh: {
filters: {
failedlogin: filter_default + {
regex: ['...'],
},
},
},
},
}
Defaults can be overridden:
{
streams: {
ssh: {
filters: {
failedlogin: filter_default + {
regex: ['...'],
// retry is overridden here
retry: 1,
},
},
},
},
}
And the + is optional:
{
streams: {
ssh: {
filters: {
failedlogin: filter_default {
regex: ['...'],
},
},
},
},
}
How do I add multiple actions defined by JSONnet functions on the same filter?
Let's take this example: we have two functions defining actions:
The first is made to ban the IP using Linux's iptables firewall:
local banFor(time) = {
ban: {
cmd: ['ip46tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
},
unban: {
after: time,
cmd: ['ip46tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
},
};
The second sends a mail:
local sendmail(text) = {
mail: {
cmd: ['sh', '-c', '/root/scripts/mailreaction.sh', text],
},
};
Both create a set of actions. We want to merge the two sets.
To merge two sets with JSONnet, it's as easy as set1 + set2.
Let's see what it looks like with a real example.
local banFor(time) = {
ban: {
cmd: ['ip46tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
},
unban: {
after: time,
cmd: ['ip46tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
},
};
local sendmail(text) = {
mail: {
cmd: ['sh', '-c', '/root/scripts/mailreaction.sh', text],
},
};
{
streams: {
ssh: {
filters: {
failedlogin: {
regex: [
// skipping
],
retry: 3,
retryperiod: '3h',
actions: banFor('720h') + sendmail('banned <ip> from service ssh'),
},
},
},
},
}
This will generate this configuration:
{
"streams": {
"ssh": {
"filters": {
"failedlogin": {
"regex": [ ],
"retry": 3,
"retryperiod": "3h",
"actions": {
"ban": {
"cmd": [ "ip46tables", "-w", "-A", "reaction", "-s", "<ip>", "-j", "DROP" ]
},
"unban": {
"after": "720h",
"cmd": [ "ip46tables", "-w", "-D", "reaction", "-s", "<ip>", "-j", "DROP" ]
},
"mail": {
"cmd": [ "sh", "-c", "/root/scripts/mailreaction.sh", "banned <ip> from service ssh" ]
}
}
}
}
}
}
}
How do I separate my configuration in multiple files?
Starting with reaction v2.0.0, you can specify a folder containing multiple configuration files. reaction will read and merge all files that:
- do not start with . or _,
- and end with .json, .jsonnet, .yml or .yaml.
JSONnet files starting with _ or . will not be directly read.
You can manually import them from other files, for example to store definitions used across files:
you can use the import JSONnet keyword to import files, relative to the file calling them. (Remember, the JSONnet tutorial is a good place to understand its basics.)
Here's an example of how you could do this:
/etc/reaction/main.jsonnet
{
streams: {
ssh: {
cmd: [ "tail", "-F", /* skip */ ],
},
// ...
},
}
/etc/reaction/ssh.jsonnet
local lib = import '_lib.jsonnet';
{
streams: {
ssh: {
filters: {
failedlogin: lib.filter_default + {
regex: ['...'],
retry: 3,
retryperiod: '2h',
actions: lib.banFor('30d'),
},
},
},
},
}
/etc/reaction/_lib.jsonnet
(definitions you can reuse in all other files)
local banFor(time) = {
ban: {
cmd: ['ip46tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
},
unban: {
after: time,
cmd: ['ip46tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
},
};
local filter_default = {
retry: 3,
retryperiod: '3h',
actions: banFor('24h'),
};
{
banFor: banFor,
filter_default: filter_default,
}
You can split different patterns, streams and filters across files.
However, you can't spread the definition of one pattern or filter across multiple files.
How do I use environment variables in actions?
Environment variables available to the reaction daemon are also available to Stream and Action commands.
Specifying environment variables
You may specify Environment or EnvironmentFile attributes in the systemd unit file,
/etc/systemd/system/reaction.service or /etc/systemd/system/reaction.service.d/env.conf:
[Service]
Environment=MY_ENV_VAR=...
EnvironmentFile=/path/to/secrets/file
See man systemd.exec for more details. These options can be specified multiple times.
Using environment variables in scripts
Let's say you run a command with a custom shell script:
cmd: ['/usr/local/bin/my_script.sh', '<ip>']
You may use environment variables inside, for example $MY_ENV_VAR:
echo "$MY_ENV_VAR"
For a Python script, variables are accessible with os.environ.
import os
print(os.environ['MY_ENV_VAR'])
Directly substituting environment variables in reaction actions
Note that the following is not substituted: environment variable expansion is a shell feature, and reaction doesn't currently have any shell-like features:
# Doesn't work!
cmd: ['curl', '$HOOK_URL', '--json', '{ ... }']
See issue #127 for discussion.
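A possible workaround (a sketch, not taken from reaction's documentation) is to let a shell do the expansion, while keeping any Pattern value out of the inline script as recommended on the Security page:

# $HOOK_URL is expanded by sh; the payload is passed as a positional argument ($1)
cmd: ['sh', '-c', 'exec curl "$HOOK_URL" --json "$1"', 'hook', '{ ... }']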
Streams
When defining a Stream, you should use a command that follows new writes on logs and print them as they arise.
The command should not print older lines.
For example, tail -F /var/log/nginx/access.log will print the last 10 lines first, then follow appended lines.
This is a problem because restarting reaction would feed it the same 10 log lines again, potentially triggering actions multiple times.
Examples of good commands:
Plain file
Follow logs of one file
tail -F -n0 <FILE>
Follow multiple files as one stream.
tail -q -F -n0 <FILE1> <FILE2>
Follow multiple files as one stream.
This alternative pattern can work for any command.
sh will launch multiple commands in the background, then wait until all of them exit.
sh -c 'tail -F -n0 <FILE1> & tail -F -n0 <FILE2> & wait'
⚠️ tail -f and logrotate
When files are rotated, tail -f may keep following the rotated file and miss new lines.
Using tail -F instead follows the path itself, even if the actual file behind it changes.
See its manual for more details.
SystemD / JournalD
Logs of one systemd unit
journalctl -fn0 -u <UNIT>
Logs of multiple systemd units
journalctl -fn0 -u <UNIT> -u <UNIT>
Docker
Logs of one container
docker logs -fn0 <CONTAINER>
Logs of all the services of a docker compose file
docker compose --project-directory /path/to/directory logs -fn0
stdout/stderr
Since reaction v2, stdout and stderr are both read.
If you run reaction v1, you may use this trick to merge both in stdout for reaction:
⚠️ docker logs prints the program's stderr to stderr as well, and reaction only reads stdout before v2.0.0. You may need to capture both stdout and stderr if running reaction v1 and your Docker container logs to stderr:
cmd: ['sh', '-c', 'exec docker logs -fn0 <container> 2>&1']
There is virtually no overhead, as the sh process replaces itself with the docker logs command.
Non UTF8 streams
Since reaction v2.1.0, reaction simply ignores non-UTF-8 data: it is stripped from the log lines.
Previously, non-UTF-8 data aborted the stream.
Filters
Here, you will find examples of filters for different programs’ logs.
- AI crawlers (ChatGPT...)
- Dolibarr
- Directus
- Nextcloud
- Nginx
- Slskd
- SSH
- Traefik
- Web crawlers
- Web servers common log format
Web AI crawlers
Configuration to ban GPTBot and friends. Here the idea is to look for their User-Agents in your webserver logs.
You may as well Disallow those user agents from looking at your websites in a robots.txt file.
I personally prefer banning them, to save resources and be less cooperative with them.
Note that an AI bot may give a browser-like User Agent and go unnoticed...
While the goal of this is to prevent AI bots from feeding themselves with your websites, banning search engine bots may affect how you appear in search results.
They seem to have separate user agents for AI and for search, but who knows?
A (most probably incomplete) list of user agents based on https://darkvisitors.com/agents:
ChatGPT-User
DuckAssistBot
Meta-ExternalFetcher
AI2Bot
Applebot-Extended
Bytespider
CCBot
ClaudeBot
Diffbot
FacebookBot
Google-Extended
GPTBot
Kangaroo Bot
Meta-ExternalAgent
omgili
Timpibot
Webzio-Extended
Amazonbot
Applebot
OAI-SearchBot
PerplexityBot
YouBot
(Feel free to add your own discoveries to this list!)
As a pattern, we'll use ip. See here.
JSONnet example:
local bots = [ "ChatGPT-User", "DuckAssistBot", "Meta-ExternalFetcher", "AI2Bot", "Applebot-Extended", "Bytespider", "CCBot", "ClaudeBot", "Diffbot", "FacebookBot", "Google-Extended", "GPTBot", "Kangaroo Bot", "Meta-ExternalAgent", "omgili", "Timpibot", "Webzio-Extended", "Amazonbot", "Applebot", "OAI-SearchBot", "PerplexityBot", "YouBot" ];
{
streams: {
nginx: {
cmd: ['...'], // see ./nginx.md
filters: {
aiBots: {
regex: [
// User-Agent is the last field
// Bot's name can be anywhere in the User-Agent
// (hence the leading and trailing [^"]*
@'^<ip> .* "[^"]*%s[^"]*"$' % bot
for bot in bots
],
actions: banFor('720h'),
},
},
},
traefik: {
cmd: ['...'], // see ./traefik.md
filters: {
aiBots: {
regex: [
// request_User-Agent is the last field
// the field is not present by default
// see ./traefik.md to add this header field
// Bot's name can be anywhere in the User-Agent
// (hence the leading and trailing [^"]*
@'^.*"ClientHost":"<ip>".*"request_User-Agent":"[^"]*%s[^"]*"' % bot
for bot in bots
],
actions: banFor('720h'),
},
},
},
},
}
YAML Example:
streams:
nginx:
cmd: ['...'] # see ./nginx.md
filters:
aiBots:
regex:
# User-Agent is the last field
# Bot's name can be anywhere in the User-Agent
# (hence the leading and trailing [^"]*
- '^<ip>.*"[^"]*ChatGPT-User[^"]*"$'
- '^<ip>.*"[^"]*DuckAssistBot[^"]*"$'
- '^<ip>.*"[^"]*Meta-ExternalFetcher[^"]*"$'
- '...' # Repeat for each bot
actions: '...' # your ban actions here
Dolibarr
Configuration for a Dolibarr instance. Dolibarr is one of the leading open source ERP/CRM web applications.
As an action, we'll use iptables. See here.
As a pattern, we'll use ip. See here.
Dolibarr "logs" module must be activated !
{
streams: {
// Ban hosts failing to connect to Dolibarr
dolibarr: {
cmd: ['tail', '-fn0', '/path/to/dolibarr/documents/dolibarr.log'],
filters: {
bad_password: {
regex: [
@'NOTICE <ip> .*Bad password, connexion refused',
],
retry: 3,
retryperiod: '1h',
actions: banFor('48h'),
},
},
},
},
}
Directus
Configuration for the Directus web service.
Directus doesn't log failed login attempts, so one must read the logs of the reverse proxy configured in front of Directus.
The HTTP code sent after a failed login is 401, Unauthorized.
The request to authenticate on Directus is a POST on /auth/login.
As a pattern, we'll use ip. See here.
A regex for nginx can look like this:
@'^<ip> .* domain.name "POST /auth/login HTTP/..." 401 '
- adjust https://domain.name according to your domain.
- if Directus is served on a subpath, say /editor, then adjust to POST /editor/auth/login.
Example:
{
streams: {
nginx: {
cmd: ['...'], // see ./nginx.md
filters: {
directus: {
regex: [
@'^<ip> .* directus.domain "POST /auth/login HTTP/..." 401 '
],
retry: 3,
retryperiod: '6h',
actions: banFor('48h'),
},
},
},
},
}
Nextcloud
Configuration for the Nextcloud web service.
Nextcloud logs failed login attempts, so we will read Nextcloud logs.
We can't use the reverse proxy's logs, because when a user logs in (a POST on /login), the HTTP status code returned by Nextcloud is always the same: 303, See Other. (That means the client has to reload the same page, but using GET.)
As a pattern, we'll use ip. See here.
See Nextcloud documentation on logging to check where your application logs are.
There are multiple log configurations possible with Nextcloud. The example covers 2 cases, but there are more! Feel free to contribute your own if you think it's relevant.
Example:
{
streams: {
nextcloud: {
// with a PHP-FPM worker logging to systemd
cmd: ['journalctl', '-fn0', '-u', 'phpfpm-nextcloud.service'],
// or, when logging to a file (keep only one of the two cmd lines):
// cmd: ['tail', '-fn0', 'NEXTCLOUD_DIR/data/nextcloud.log'],
filters: {
nextcloud: {
regex: [
@'"remoteAddr":"<ip>".*"message":"Login failed:',
@'"remoteAddr":"<ip>".*"message":"Trusted domain error.',
],
retry: 3,
retryperiod: '1h',
actions: banFor('3h'),
},
},
},
},
}
Nginx
Configuration for the Nginx web server.
Nginx most often logs to /var/log/nginx/access.log
.
The Common Log Format, used by multiple webservers, is described in another wiki page.
Examples in this wiki use this configuration in nginx's http { }
block:
log_format withhost '$remote_addr - $remote_user [$time_local] $host "$request" $status $bytes_sent "$http_referer" "$http_user_agent"';
access_log /var/log/nginx/access.log withhost;
As a pattern, we'll use ip. See here.
This gives easy access to the IP, as it's at the very beginning of the line, and to the domain name, which is useful when nginx is configured for multiple virtual hosts.
A regex for nginx can look like this:
@'^<ip> .* "POST /auth/login HTTP/..." 401 '
// ^ ^ ^
// Method Path Status Code
Or this:
@'^<ip> .* domain.name "POST /auth/login HTTP/..." 401 '
// ^ ^ ^ ^
// Domain Method Path Status Code
Adjust domain.name according to your domain.
Example:
{
streams: {
nginx: {
cmd: ['tail', '-n0', '-f', '/var/log/nginx/access.log'],
filters: {
directus: {
regex: [
@'^<ip> .* directus.domain "POST /auth/login HTTP/..." 401 ',
],
actions: banFor('1h'),
},
},
},
},
}
You can decide that all 401 (Unauthorized) and 403 (Forbidden) responses are suspicious, and have a filter for any 401 or 403:
Example:
{
streams: {
nginx: {
cmd: ['tail', '-n0', '-f', '/var/log/nginx/access.log'],
filters: {
all403s: {
regex: [
@'^<ip> .* "(POST|GET) /[^ ]* HTTP/..." (401|403) ',
],
retry: 15,
retryperiod: '5m',
actions: banFor('1h'),
},
},
},
},
}
slskd
Configuration for the slskd web service.
slskd doesn't log failed login attempts, so one must read the logs of the reverse proxy configured in front of slskd.
The HTTP code sent after a failed login is 401, Unauthorized.
The request to authenticate on slskd is a POST on /api/v0/session.
As a pattern, we'll use ip. See here.
A regex for nginx can look like this:
@'^<ip> .* slskd.domain "POST /api/v0/session HTTP/..." 401 ',
- adjust https://slskd.domain according to your domain.
- if slskd is served on a subpath, say /slskd, then adjust to POST /slskd/api/v0/session.
Example:
{
streams: {
nginx: {
cmd: ['...'], // see ./nginx.md
filters: {
slskd: {
regex: [
@'^<ip> .* slskd.domain "POST /api/v0/session HTTP/..." 401 ',
],
retry: 3,
retryperiod: '6h',
actions: banFor('48h'),
},
},
},
},
}
SSH
Configuration for the OpenSSH service.
As an action, we'll use iptables. See here.
As a pattern, we'll use ip. See here.
{
streams: {
// Ban hosts failing to connect via ssh
ssh: {
// Use systemd's `journalctl` to tail logs
cmd: ['journalctl', '-fn0', '-u', 'ssh.service'],
// ⚠️ may also be ↑ sshd.service, depending on the distribution
filters: {
failedlogin: {
regex: [
// Auth fail
@'authentication failure;.*rhost=<ip>',
// More specific auth fail
@'Failed password for .* from <ip>',
// Other auth failures
@'Connection from <ip> port [0-9]*: invalid format',
@'Invalid user .* from <ip>',
// Optional: Client disconnects during authentication
@'Connection (reset|closed) by (authenticating|invalid) user .* <ip> port',
@'Connection (reset|closed) by <ip> port',
@'Disconnected from .* <ip> .*preauth',
@'Disconnecting .* <ip> .*preauth',
@'Timeout before authentication for <ip>',
@'Received disconnect from <ip> .*preauth',
@'Unable to negotiate with <ip> .*preauth',
],
retry: 3,
retryperiod: '6h',
actions: banFor('48h'),
},
},
},
},
}
OpenBSD
{
streams: {
// Ban hosts failing to connect via ssh
ssh: {
// Use `/var/log/authlog` to tail logs
cmd: ['tail', '-fn0', '/var/log/authlog'],
filters: {
failedlogin: {
regex: [
// Auth fail
@'Failed password for invalid user .* from <ip>',
// Client disconnects during authentication
@'Disconnected from invalid user .* <ip>',
],
retry: 3,
retryperiod: '6h',
actions: banFor('48h'),
},
},
},
},
}
Depending on the Linux distribution (or other UNIX system), your OpenSSH logs may vary.
Check for yourself which logs your SSH server prints!
Traefik
Configuration for the Traefik web server.
Traefik most often logs to stdout. If using Docker, it will be accessible using docker logs -n0 -f <traefik_container_name>.
You can configure other ways to log traefik, see its documentation.
By default, Traefik logs to the Common Log Format, which is described in this section.
But its log format is often configured to json, which gives much more detailed logs. That's what we'll describe here.
When logging in the json format, everything is printed on one line, allowing for easy regex parsing.
Here's what it looks like pretty printed:
{
"ClientAddr": "1.2.3.4:2048",
"ClientHost": "1.2.3.4",
"ClientPort": "2048",
"DownstreamContentSize": 252,
"DownstreamStatus": 200,
"Duration": 1000,
"OriginContentSize": 252,
"OriginDuration": 900,
"OriginStatus": 206,
"Overhead": 10000,
"RequestAddr": "domain.name",
"RequestContentSize": 0,
"RequestCount": 123,
"RequestHost": "domain.name",
"RequestMethod": "GET",
"RequestPath": "/login",
"RequestPort": "-",
"RequestProtocol": "HTTP/2.0",
"RequestScheme": "https",
"RetryAttempts": 0,
"RouterName": "my-service@docker",
"ServiceAddr": "172.1.0.1:80",
"ServiceName": "my-service@docker",
"downstream_Header1": "...",
"downstream_Header2": "...",
"entryPointName": "websecure",
"level": "info",
"msg": "",
"origin_Header1": "...",
"origin_Header2": "...",
"request_Header1": "...",
"request_Header2": "...",
"time": "YYYY-MM-DDTHH:MM:SS+UT:C0"
}
As a pattern, we'll use ip. See here.
A regex for traefik can look like this:
@'.*,"ClientHost":"<ip>",.*,"DownstreamStatus":401,.*,"RequestPath":"/login".*'
Or this:
@'.*,"ClientHost":"<ip>",.*,"DownstreamStatus":401,.*,"RequestHost":"domain.name",.*,"RequestPath":"/login".*'
Adjust domain.name according to your domain.
Example:
{
streams: {
traefik: {
cmd: ['tail', '-n0', '-f', '/var/lib/traefik/access.log'],
filters: {
website: {
regex: [ @',"ClientHost":"<ip>",.*,"DownstreamStatus":403,.*,"RequestHost":"website.example",.*,"RequestPath":"/login",' ],
retry: 3,
retryperiod: '3h',
actions: banFor('24h'),
},
},
},
},
}
You can decide that all 401 (Unauthorized) and 403 (Forbidden) responses are suspicious, and have a filter for any 401 or 403:
{
streams: {
traefik: {
cmd: ['docker', 'logs', '-n0', '-f', 'traefik'],
filters: {
website: {
regex: [ @',"ClientHost":"<ip>",.*,"DownstreamStatus":(401|403),' ],
retry: 15,
retryperiod: '5m',
actions: banFor('1h'),
},
},
},
},
}
It can be very useful to have the User-Agent header value in traefik logs. By default, it is dropped. So, assuming you are using a traefik.toml file, allow User-Agent like this:
[accessLog]
filePath = "access.log"
bufferingSize = 100
format = "json" # it is easier to parse than flat format
[accessLog.fields]
defaultMode = "keep"
[accessLog.fields.headers]
defaultMode = "drop"
[accessLog.fields.headers.names]
"User-Agent" = "keep"
Web crawlers
Configuration to ban malicious Web crawlers. Here the idea is that most attackers will first scan a server for something to attack.
We stick to paths that no well-intentioned human would try by themselves.
List:
/.env
/password.txt
/passwords.txt
/config\.json
- Rationale: .env and password(s).txt, config.json are often searched by bots, as they can contain sensitive information, such as database credentials. Do not include the third path if a client must retrieve a config.json file.
/info\.php
- Rationale: info.php is a file often created for debugging purposes, which contains <?php phpinfo() ?>. This function exposes way too much information about the PHP environment, which is very useful when looking for security holes.
/wp-login\.php
/wp-includes
- Rationale: Wordpress default authentication path. Do not include if you use Wordpress.
/owa/auth/logon.aspx
- Rationale: Outlook authentication path. Do not include if Outlook is in use on your infrastructure.
/auth.html
/auth1.html
- Rationale: I don't know what it is, but it has been tried by numerous bots on my webserver. Do not include if you use this path on your infrastructure.
/dns-query
- Rationale: DoH (DNS over HTTPS) standard path. Do not include if you have a DoH server on your infrastructure.
/\.git/
- Rationale: Often looking for secrets in .git/config, etc. Do not include if you host a Git forge.
(Feel free to add your own discoveries to this list!)
By adding (?:[^/" ]*/)* at the beginning of each path, we also cover all subpaths.
As a pattern, we'll use ip. See here.
Example:
{
streams: {
nginx: {
cmd: ['...'], // see ./nginx.md
filters: {
crawlers: {
regex: [
// (?:[^/" ]*/)* is a "non-capturing group" regex that allows for subpaths:
// e.g. /code/.env is matched as well as /.env
@'^<ip> .*"GET /(?:[^/" ]*/)*\.env ',
@'^<ip> .*"GET /(?:[^/" ]*/)*password.txt ',
@'^<ip> .*"GET /(?:[^/" ]*/)*passwords.txt ',
@'^<ip> .*"GET /(?:[^/" ]*/)*config\.json ',
@'^<ip> .*"GET /(?:[^/" ]*/)*info\.php ',
@'^<ip> .*"GET /(?:[^/" ]*/)*wp-login\.php',
@'^<ip> .*"GET /(?:[^/" ]*/)*wp-includes',
@'^<ip> .*"GET /(?:[^/" ]*/)*owa/auth/logon.aspx ',
@'^<ip> .*"GET /(?:[^/" ]*/)*auth.html ',
@'^<ip> .*"GET /(?:[^/" ]*/)*auth1.html ',
@'^<ip> .*"GET /(?:[^/" ]*/)*dns-query ',
@'^<ip> .*"GET /(?:[^/" ]*/)*\.git/',
],
actions: banFor('720h'),
},
},
},
},
}
Common Log Format
The common log format is supported by most webservers.
<remote_IP_address> - <client_user_name_if_available> [<timestamp>] "<request_method> <request_path> <request_protocol>" <HTTP_status> <content-length> "<request_referrer>" "<request_user_agent>" <number_of_requests_received_since_webserver_started> "<router_name>" "<server_URL>" <request_duration_in_ms>ms
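An illustrative (made-up) log line in this format:

203.0.113.7 - - [10/Oct/2024:13:55:36 +0200] "POST /auth/login HTTP/1.1" 401 512 "https://domain.name/login" "Mozilla/5.0" 42 "my-router" "http://10.0.0.1:80" 12ms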
Examples of reaction log regexes with a webserver that uses the CLF:
@'^<ip> .* "POST /auth/login HTTP/..." 401 [0-9]+ "https://domain.name/.*'
// ^ ^ ^ ^ ^ ^
// IP Method Path | Status Code Domain
// |
// HTTP version is ignored
@'^<ip> .* "(GET|POST) /login HTTP/..." 401 '
// ^ ^ ^ ^
// IP Method Path Status Code
Traefik's JSON log format has its own documentation
Actions
Here, you will find examples of actions with different programs.
- AbuseIP DB reporting
- firewalld
- iptables
- nftables
- PacketFilter
- PostgreSQL
- SMS alerting with FreeMobile
Reporting to AbuseIPDB
AbuseIPDB is a collaborative platform that allows its users to report and check IP addresses associated with malicious activities, helping to identify and mitigate potential threats in cybersecurity.
This page explains how to report bad IPs, but NOT how to check the reputation of incoming IPs.
Requirements
- An account on AbuseIPDB
- reaction can access the internet (at least AbuseIPDB's servers)
Setup
Grant access to the API
- Create an API token here
- Put it inside a file accessible by reaction
- Make sure only reaction (or root) can access it
Typical example:
echo "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef" | sudo tee /etc/reaction/abuseip.key
sudo chown reaction:reaction /etc/reaction/abuseip.key
sudo chmod 400 /etc/reaction/abuseip.key
Configure reaction
In your reaction.jsonnet, add this:
local abuseip_params = {
badbots: {
category: "19",
comment: "Did not follow robots.txt directives",
},
sshbruteforce: {
category: "18,22",
comment: "Bruteforced SSH server",
},
webattack: {
category: "21",
comment: "Requested unexistent endpoint (Wordpress login, etc.)",
},
};
local report(type) = {
abuseIP: {
cmd: ['curl',
'--fail', '--silent', '--show-error',
'https://api.abuseipdb.com/api/v2/report',
'--variable', 'API_KEY@/etc/reaction/abuseip.key',
'--header', 'Accept: application/json',
'--expand-header', 'Key: {{API_KEY:trim}}',
'--data-urlencode', 'comment=' + abuseip_params[type].comment,
'--data-urlencode', 'ip=<ip>',
'--data', 'categories=' + abuseip_params[type].category,
],
// do not run again on reaction restart
oneshot: true,
},
};
Usage
You can use report('type'), in combination with banFor if you want to.
The type must be declared in abuseip_params; refer to Advanced configuration if needed.
Ex:
// [...]
actions: banFor('24h') + report('webattack'),
Advanced configuration
You may edit the abuseip_params variable to add relevant categories and descriptions.
The reference is here: https://www.abuseipdb.com/categories
Comments support variables, so this is a valid configuration:
local banFor(time) = {
// [...]
};
local abuseip_params = {
webattack: {
category: "21",
comment: "HTTP <method> on a non-existent endpoint (Wordpress)",
},
};
local report(type) = {
abuseIP: {
cmd: ['curl',
'--fail', '--silent', '--show-error',
'https://api.abuseipdb.com/api/v2/report',
'--variable', 'API_KEY@/etc/reaction/abuseip.key',
'--header', 'Accept: application/json',
'--expand-header', 'Key: {{API_KEY:trim}}',
'--data-urlencode', 'comment=' + abuseip_params[type].comment,
'--data-urlencode', 'ip=<ip>',
'--data', 'categories=' + abuseip_params[type].category,
],
// do not run again on reaction restart
oneshot: true,
},
};
{
patterns: {
ip: {
// see Patterns section
},
method: {
regex: 'GET|POST|PUT|HEAD',
}
},
start: [
// [...],
],
stop: [
// [...],
],
streams: {
web: {
cmd: [ 'journalctl', '-fu', 'haproxy.service' ],
filters: {
scanners: {
regex: [
// wordpress
@': <ip>:\d+ .+<method> .*/wp-login\w+',
],
actions: banFor('720h') + report('webattack'),
},
},
},
},
}
firewalld
The proposed way to ban IPs using firewalld uses one reaction zone.
The IPs are banned on all ports, meaning banned IPs won't be able to connect on any service.
Start/Stop
We first need to create this zone on startup.
{
start: [
// create the new zone
['firewall-cmd', '--permanent', '--new-zone', 'reaction'],
// set its target to DROP
['firewall-cmd', '--permanent', '--set-target', 'DROP', '--zone', 'reaction'],
// reload firewalld to be able to use the new zone
['firewall-cmd', '--reload'],
],
}
We want reaction to remove it when quitting:
{
stop: [
// remove the zone
['firewall-cmd', '--permanent', '--delete-zone', 'reaction'],
// reload firewalld
['firewall-cmd', '--reload'],
],
}
Ban/Unban
Now we can ban an IP with this command:
{
cmd: ['firewall-cmd', '--zone', 'reaction', '--add-source', '<ip>'],
}
And unban the IP with this command:
{
cmd: ['firewall-cmd', '--zone', 'reaction', '--remove-source', '<ip>']
}
A good practice is to wrap the actions in a function with parameters:
local banFor(time) = {
ban: {
cmd: ['firewall-cmd', '--zone', 'reaction', '--add-source', '<ip>'],
},
unban: {
cmd: ['firewall-cmd', '--zone', 'reaction', '--remove-source', '<ip>'],
after: time,
},
};
See how to merge different actions in JSONnet FAQ
Real-world example
local banFor(time) = {
ban: {
cmd: ['firewall-cmd', '--zone', 'reaction', '--add-source', '<ip>'],
},
unban: {
after: time,
cmd: ['firewall-cmd', '--zone', 'reaction', '--remove-source', '<ip>']
},
};
{
patterns: {
// IPs can be IPv4 or IPv6
ip: {
regex: '...', // See patterns.md
},
},
start: [
['firewall-cmd', '--permanent', '--new-zone', 'reaction'],
['firewall-cmd', '--permanent', '--set-target', 'DROP', '--zone', 'reaction'],
['firewall-cmd', '--reload'],
],
stop: [
['firewall-cmd', '--permanent', '--delete-zone', 'reaction'],
['firewall-cmd', '--reload'],
],
streams: {
// Ban hosts failing to connect via ssh
ssh: {
cmd: ['journalctl', '-fn0', '-u', 'sshd.service'],
filters: {
failedlogin: {
regex: [
@'authentication failure;.*rhost=<ip>',
@'Connection reset by authenticating user .* <ip>',
@'Failed password for .* from <ip>',
],
retry: 3,
retryperiod: '6h',
actions: banFor('48h'),
},
},
},
},
}
iptables
The proposed way to ban IPs using iptables uses one reaction chain.
The IPs are banned on all ports, meaning banned IPs won't be able to connect on any service.
⚠️ This part of the doc refers to ip46tables, which is deprecated. See the example config for the ipv4only/ipv6only feature, which lets you get rid of it.
We use the ip46tables binary included alongside reaction, which supports both IPv4 and IPv6.
Start/Stop
We first need to create this chain on startup, and add it at the beginning of the INPUT
iptables chain.
Docker & LXD users will need to add this rule to the FORWARD
chain as well.
{
start: [
// create the `N`ew chain
['ip46tables', '-w', '-N', 'reaction'],
// `I`nsert the chain at the beginning of INPUT & FORWARD
['ip46tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction'],
['ip46tables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction'],
],
}
We want reaction to remove it when quitting:
{
stop: [
// `D`elete it from INPUT & FORWARD
['ip46tables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction'],
['ip46tables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction'],
// `F`lush it (delete all the items in the chain)
['ip46tables', '-w', '-F', 'reaction'],
// Remove it completely
['ip46tables', '-w', '-X', 'reaction'],
],
}
Ban/Unban
Now we can ban an IP with this command:
{
cmd: ['ip46tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP']
// or
cmd: [ 'sh', '-c', 'ip46tables -w -A reaction -s <ip> -j DROP']
}
And unban the IP with this command:
{
cmd: ['ip46tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP']
// or
cmd: [ 'sh', '-c', 'ip46tables -w -D reaction -s <ip> -j DROP']
}
A good practice is to wrap the actions in a function with parameters:
local banFor(time) = {
ban: {
cmd: ['ip46tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
},
unban: {
cmd: ['ip46tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
after: time,
},
};
See how to merge different actions in JSONnet FAQ
Real-world example
local banFor(time) = {
ban: {
cmd: ['ip46tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
},
unban: {
after: time,
cmd: ['ip46tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
},
};
{
patterns: {
// IPs can be IPv4 or IPv6
// ip46tables (C program also in this repo) handles running the good commands
ip: {
regex: '...', // See patterns.md
},
},
start: [
['ip46tables', '-w', '-N', 'reaction'],
['ip46tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction'],
['ip46tables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction'],
],
stop: [
['ip46tables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction'],
['ip46tables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction'],
['ip46tables', '-w', '-F', 'reaction'],
['ip46tables', '-w', '-X', 'reaction'],
],
streams: {
// Ban hosts failing to connect via ssh
ssh: {
cmd: ['journalctl', '-fn0', '-u', 'sshd.service'],
filters: {
failedlogin: {
regex: [
@'authentication failure;.*rhost=<ip>',
@'Connection reset by authenticating user .* <ip>',
@'Failed password for .* from <ip>',
],
retry: 3,
retryperiod: '6h',
actions: banFor('48h'),
},
},
},
},
}
nftables
The proposed way to ban IPs with nftables uses its own reaction table.
Inside, there are two sets and two rules.
One set/rule couple is for IPv4 and the other one is for IPv6.
The IPs are banned on all ports, meaning banned IPs won't be able to connect on any service of the host.
We don't make use of nftables timeouts because we need reaction to handle the lifecycle of a ban. If you choose to unban with nftables timeouts, you won't have access to all of reaction's features, as it won't know what's currently banned, nor how to unban an IP: showing bans with reaction show and unbanning with reaction flush can't be supported.
⚠️ There is no chain for forwarded packets, so Docker containers (for example) are unprotected! Any contribution welcome to add this forward chain here. See #85.
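An untested sketch of what such a forward chain could look like, mirroring the input chain below (check it against your own setup before relying on it):

chain forward {
  type filter hook forward priority 0
  policy accept
  ip saddr @ipv4bans drop
  ip6 saddr @ipv6bans drop
}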
Start/Stop
We create the table with relevant rules and filters
{
start: [
['nft', |||
table inet reaction {
set ipv4bans {
type ipv4_addr
flags interval
auto-merge
}
set ipv6bans {
type ipv6_addr
flags interval
auto-merge
}
chain input {
type filter hook input priority 0
policy accept
ip saddr @ipv4bans drop
ip6 saddr @ipv6bans drop
}
}
||| ],
],
}
We want reaction to delete all its setup when quitting:
{
stop: [
['nft', 'delete table inet reaction'],
],
}
🚧 auto-merge has been reported not to work well with nftables < 1.0.7
Ban/Unban
IPv4
Now we can ban an IPv4 address with this command:
{
cmd: ['nft', 'add element inet reaction ipv4bans { <ipv4> }']
}
And unban the IP with this command:
{
cmd: ['nft', 'delete element inet reaction ipv4bans { <ipv4> }']
}
IPv6
IPv6 works the same way:
{
cmd: ['nft', 'add element inet reaction ipv6bans { <ipv6> }']
}
{
cmd: ['nft', 'delete element inet reaction ipv6bans { <ipv6> }']
}
IPv4/IPv6
⚠️ This part of the doc refers to nft46, which is deprecated. See the example config for the ipv4only/ipv6only feature, which lets you get rid of it.
A very small utility, nft46, has been written to unify IPv4 and IPv6 commands:
{
cmd: ['nft46', 'add element inet reaction ipvXbans { <ip> }']
}
{
cmd: ['nft46', 'delete element inet reaction ipvXbans { <ip> }']
}
The X in the command will be changed to 4 or 6 at runtime, depending on the IP provided.
There must be an X before the curly brackets, followed by this sequence: {, at least one space, exactly one IP (v4 or v6), at least one space, and }.
You can do it!
Wrapping this in a reusable JSONnet function
local banFor(time) = {
ban: {
cmd: ['nft46', 'add element inet reaction ipvXbans { <ip> }'],
},
unban: {
cmd: ['nft46', 'delete element inet reaction ipvXbans { <ip> }'],
after: time,
},
};
Real-world example
local banFor(time) = {
ban: {
cmd: ['nft46', 'add element inet reaction ipvXbans { <ip> }'],
},
unban: {
cmd: ['nft46', 'delete element inet reaction ipvXbans { <ip> }'],
after: time,
},
};
{
patterns: {
ip: {
regex: '...', // See patterns.md
},
},
start: [
['nft', |||
table inet reaction {
set ipv4bans {
type ipv4_addr
flags interval
auto-merge
}
set ipv6bans {
type ipv6_addr
flags interval
auto-merge
}
chain input {
type filter hook input priority 0
policy accept
ip saddr @ipv4bans drop
ip6 saddr @ipv6bans drop
}
}
||| ],
],
stop: [
['nft', 'delete table inet reaction'],
],
streams: {
// Ban hosts failing to connect via ssh
ssh: {
cmd: ['journalctl', '-fn0', '-u', 'sshd.service'],
filters: {
failedlogin: {
regex: [
@'authentication failure;.*rhost=<ip>',
@'Connection reset by authenticating user .* <ip>',
@'Failed password for .* from <ip>',
],
retry: 3,
retryperiod: '6h',
actions: banFor('48h'),
},
},
},
},
}
OpenBSD PacketFilter
The proposed way to ban IPs using OpenBSD pf uses one t_reaction table.
We first need to create this table in our main /etc/pf.conf file.
table <t_reaction> persist
The IPs are banned on all ports, meaning banned IPs won't be able to connect on any service.
Start/Stop
There is no specific action taken on start. On stop, all IP addresses contained in the t_reaction table will be flushed:
start: [
],
stop: [
['pfctl', '-t', 't_reaction', '-T', 'flush'],
],
Ban/Unban
Then, in the reaction.conf file, we need to specify the pfctl behaviour and alter the ban and unban commands:
local iptables(args) = [ 'pfctl'] + args;
local banFor(time) = {
ban: {
cmd: ['pfctl', '-t', 't_reaction', '-T', 'add', '<ip>'],
},
unban: {
after: time,
cmd: ['pfctl', '-t', 't_reaction', '-T', 'del', '<ip>'],
},
};
See how to merge different actions in JSONnet FAQ
Real-world example
local banFor(time) = {
ban: {
cmd: ['pfctl', '-t', 't_reaction', '-T', 'add', '<ip>'],
},
unban: {
after: time,
cmd: ['pfctl', '-t', 't_reaction', '-T', 'del', '<ip>'],
},
};
{
patterns: {
ip: {
regex: @'(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})',
},
},
start: [
],
stop: [
['pfctl', '-t', 't_reaction', '-T', 'flush'],
],
streams: {
ssh: {
cmd: [ 'tail', '-n0', '-f', '/var/log/authlog' ],
filters: {
failedlogin: {
regex: [
// Auth fail
@'Failed password for invalid user .* from <ip>',
// Client disconnects during authentication
@'Disconnected from invalid user .* <ip>',
],
retry: 3,
retryperiod: '6h',
actions: banFor('48h'),
},
},
},
},
}
Inserting IPs in a PostgreSQL table
It can be tempting to just use the psql CLI, but using it with untrusted input can lead to SQL injection.
Better be safe and use prepared statements, which ensure no input can result in arbitrary SQL statements.
We'll use the Deno JavaScript runtime, which automatically downloads dependencies on first use.
Here we assume that we can connect to a local database which has this table:
CREATE TABLE ips(
ip VARCHAR(45),
time TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
We can then run this script:
insert.js
import postgres from "npm:postgres";
// We get the first CLI argument
const ip = Deno.args[0];
// See https://www.npmjs.com/package/postgres#connection
// For connection options
const sql = postgres({
path: "/run/postgresql/.s.PGSQL.5432"
});
// Parameters are automatically extracted and handled by the database
// so that SQL injection isn't possible.
// Any generic value will be serialized according to an inferred type,
// and replaced by a PostgreSQL protocol placeholder $1, $2, ....
// The parameters are then sent separately to the database which
// handles escaping & casting.
await sql`
INSERT INTO ips (ip) VALUES(${ ip })
`;
await sql.end();
With the following command:
deno run -A /path/to/insert.js 1.2.3.4
Example
Here's an example reaction configuration using it:
local psql_insert = {
psql_insert: {
cmd: [ "deno", "run", "-A", "/path/to/insert.js", "<ip>" ],
},
};
local banFor(time) = {
// firewall configuration
};
{
// ...
streams: {
mystream: {
// ...
filters: {
myfilter: {
// ...
actions: psql_insert + banFor('48h'),
},
},
}
},
}
SMS alerting with Free Mobile
🇬🇧 Free Mobile is a French mobile/Internet access provider; this wiki entry was originally written in French.
🥐 Free Mobile offers a very simple API to send SMS messages to yourself, as long as you have a mobile subscription.
You first have to enable SMS Notifications in your customer account.
You then receive an API key, which we'll call PASS.
After that, all it takes is sending an HTTP GET request:
https://smsapi.free-mobile.fr/sendmsg?user=USER&pass=PASS&msg=MSG
So a simple cURL call would be enough:
curl https://smsapi.free-mobile.fr/sendmsg?user=12345678&pass=abcdefghijlkmnop&msg=coucou
However:
- As the msg parameter is part of the URL, it must be URL-encoded with the appropriate %20 characters.
- If you want to keep reaction.jsonnet in a Git repository, you'll prefer to keep the secrets (user, pass) in external files readable only by root (or by reaction, if you created a dedicated user for it).
cURL can do all of this in a single command, so there's no need for a shell script.
We use the --variable option, added in curl 8.3.0 (September 2023).
Your distribution may not have packaged it yet.
Without further ado, here's a JSONnet function with the complete cURL command:
local sendsms(message) = {
sendsms: {
cmd: [
"${pkgs.curl}/bin/curl",
// Return an error code if the HTTP status code indicates an error
"--fail",
// Print nothing by default
"--silent",
// But still print errors
"--show-error",
// Store the content of /var/secrets/mobileapi-user in the USER variable
"--variable", "USER@/var/secrets/mobileapi-user",
// Store the content of /var/secrets/mobileapi-pass in the PASS variable
"--variable", "PASS@/var/secrets/mobileapi-pass",
// Store the message content in the MSG variable
"--variable", "MSG=" + message,
// Trim spaces and newlines from USER and PASS, and URL-encode the MSG to send
"--expand-url", "https://smsapi.free-mobile.fr/sendmsg?user={{USER:trim}}&pass={{PASS:trim}}&msg={{MSG:trim:url}}",
],
},
};
Real-world examples
When the myservice service logs an error, send it to me by SMS.
local sendsms(message) = {
sendsms: {
cmd: [
"${pkgs.curl}/bin/curl",
"--fail",
"--silent",
"--show-error",
"--variable", "USER@/var/secrets/mobileapi-user",
"--variable", "PASS@/var/secrets/mobileapi-pass",
"--variable", "MSG=" + message,
"--expand-url", "https://smsapi.free-mobile.fr/sendmsg?user={{USER:trim}}&pass={{PASS:trim}}&msg={{MSG:trim:url}}",
],
},
};
{
patterns: {
untilEOL: { regex: '.*$' },
},
streams: {
myservice: {
cmd: ['journalctl', '-fn0', '-u', 'myservice.service'],
filters: {
errors: {
regex: [ @'ERROR <untilEOL>' ],
actions: sendsms('<untilEOL>'),
},
},
},
},
}
When an IP is banned, also send it to me by SMS.
local sendsms(message) = {
sendsms: {
cmd: [
"${pkgs.curl}/bin/curl",
"--fail",
"--silent",
"--show-error",
"--variable", "USER@/var/secrets/mobileapi-user",
"--variable", "PASS@/var/secrets/mobileapi-pass",
"--variable", "MSG=" + message,
"--expand-url", "https://smsapi.free-mobile.fr/sendmsg?user={{USER:trim}}&pass={{PASS:trim}}&msg={{MSG:trim:url}}",
],
},
};
local banFor(time) = {
// your firewall configuration goes here
};
{
patterns: {
ip: {
type: 'ip',
},
},
streams: {
ssh: {
cmd: ['journalctl', '-fn0', '-u', 'ssh.service'],
filters: {
failedlogin: {
regex: [ @'Failed password for .* from <ip>' ],
actions: banFor('48h') + sendsms('<ip> banned for 48h'),
},
},
},
},
}
Configurations
Here, you will find examples of full configurations.
- Configs of ppom
- Configs of ppom (nixos) (external)
- Configs of Eldeberen (ssh, grafana, haproxy, influxdb reporting) (external)
- Configs of Raoull (nginx antibot regexes)
- Docker usage from La Contrevoie (external)
- OpenBSD Config
Articles
English
- https://blog.ppom.me/en-reaction-v2/
- https://blog.ppom.me/en-reaction/
- https://lobste.rs/s/07u3nq/reaction_replacement_fail2ban
- https://repology.org/project/reaction-fail2ban/versions
- https://opticality.com/blog/2023/12/31/reaction-vs-fail2ban-vs-crowdsec/