Introduction
Welcome to the documentation for Hammerhead. This book will cover (almost) everything you may need to know about Hammerhead, including history, configuration, and tips & tricks.
Some quick links:
- Source Code (MPL-2.0)
- First-party mirrors (read only)
About
Hammerhead is a nimble Matrix homeserver written in Go, utilising the mautrix-go SDK. It is built from the ground up - not a fork of an existing project. It is designed first and foremost as a power-user tool, but the end goal is a usable implementation that can comfortably be used day-to-day.
If you want a daily-driver homeserver that is ready to go right now, I recommend continuwuity, a project from the same maintainers, written in Rust.
Demo instance
There is a demo instance available at hammerhead.nexy7574.co.uk. Please note that this instance is not suitable for
casual or personal use - there may be extreme restrictions on available resources, it is often running unstable
versions of Hammerhead, and the database is frequently wiped without notice.
If you want to jump on for a quick test drive, you can use a client like Element Web, Cinny, Sable, or Commet, although you can likely use whatever your existing favourite client is.
The registration token is the Codeberg repo’s HTTPS clone URL. No, you can not have admin.
Getting Started
Installing Hammerhead is relatively simple, but you need to be confident with managing a server to get the most out of it. This guide will walk you through the minimum steps required to get a working installation of Hammerhead.
This part of the guide is very in depth - consider using the TOC in the sidebar to help you navigate.
Prerequisites
To compile the server (for when you aren’t using a pre-built binary), you will need:
- The latest version of Go.
- A compatible Linux operating system: Debian-based, Arch, Fedora. Others may work but are not tested.
- git and bash in your $PATH. gcc is NOT currently required.
To run the server, you will need:
- PostgreSQL v14 or newer (check for issues regarding newly released PG versions).
To run the server, you will also want:
Important
Once you have set your “server name”, it cannot be changed. This means that if you start off without a domain and later want to switch to using one, you will have to delete your database and start fresh.
Compiling
If you wish to compile Hammerhead (instead of using the binaries created by CI or attached to releases), you can do so
with the handy build.sh script located at the root of this repository. While using this script in particular is not
required (compiling with plain go build is sufficient), the build script conveniently injects build metadata into the
binary for accurate version reporting, and offers shorthands to compile static and release-optimised builds.
Note
You do not need to compile Hammerhead to run it. CI produces static binaries for AMD64 and ARM64 on each push to
dev, and static binaries are also attached to each release. Unless you need a dynamic binary, or are planning on hacking on Hammerhead, you likely do not need to compile.
Please skip to Installing if you just want to install Hammerhead.
With the build script
To get started, make sure you have the prerequisites detailed above. Then, you can clone the repository:
git clone https://codeberg.org/timedout/hammerhead.git && cd hammerhead
And then run the build script to produce a binary at ./bin/hammerhead:
./build.sh
#- will build dynamically-linked binary
#- will build debug binary (without optimizations)
#- Tag associated with the latest commit: N/A
#- Latest tag: N/A
#- Latest commit hash: ec0f1d0
#- Dirty? yes
#- Build date: 2026.03.09T19.48.00Z
#- Golang version: go1.26.0
#- OS/Arch: linux/amd64
#
#Compiling hammerhead
#...
#codeberg.org/timedout/hammerhead/cmd/hammerhead
#
#real 0m1.941s
#user 0m1.779s
#sys 0m0.785s
Notice how the first two lines say will build dynamically-linked binary and
will build debug binary (without optimizations)? This is because by default, the script will create a debug-oriented
build. While this is still plenty fast, it is dynamically linked, and retains debug symbols, meaning it’s a few
mebibytes larger than necessary for most people. If you aren’t planning on running Hammerhead through a debugger,
you may wish to instead build a high-performance and/or static binary.
To compile a static binary, pass -static to build.sh:
./build.sh -static && file ./bin/hammerhead
#- will build static-linked binary without CGO (experimental!)
#- will build debug binary (without optimizations)
# ...
#./bin/hammerhead: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, Go BuildID=..., BuildID[sha1]=..., with debug_info, not stripped
To compile a “release” binary (one without debug symbols), pass -release:
./build.sh -release && file ./bin/hammerhead
#- will build dynamically-linked binary
#- will build release binary (with optimizations)
#...
#./bin/hammerhead: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, Go BuildID=..., BuildID[sha1]=..., stripped
You can even combine these flags (in any order) to compile a release+static binary:
./build.sh -static -release && file ./bin/hammerhead
#- will build static-linked binary without CGO (experimental!)
#- will build release binary (with optimizations)
#./bin/hammerhead: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, Go BuildID=..., BuildID[sha1]=..., stripped
Without the build script
If for some reason you are unable to use the build script (consider opening an issue!), you can still compile without it:
go build -o ./bin/ ./cmd/hammerhead
Note that this will produce a debug dynamic binary without any metadata. You will likely be unable to report bugs found when running this binary as it will not contain version data required to accurately troubleshoot problems.
Installing
In order to install Hammerhead, you will first need a binary. These can be acquired from:
Please note that CI artifacts are zipped when downloaded - you will first need to unzip them before you can get the
binary within.
Make sure you select the correct binary by checking the output of uname -m:
| Output | Version |
|---|---|
| x86_64 | AMD64 |
| aarch64 | ARM64 |
Other architectures may work, but are not built in CI, nor tested. Downloading the wrong binary will result in
an error message like exec format error: ./hammerhead-arm64 when you try to run it.
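The table above can be turned into a small shell helper. Note the binary file names here are assumptions based on the error message above - check the actual names of the release assets:

```shell
# Map `uname -m` output to the matching Hammerhead binary name.
# (Binary file names are assumed; check the actual release assets.)
pick_binary() {
    case "$1" in
        x86_64)  echo "hammerhead-amd64" ;;
        aarch64) echo "hammerhead-arm64" ;;
        *)       echo "unsupported architecture: $1" >&2; return 1 ;;
    esac
}

pick_binary "$(uname -m)"
```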
Configuring
Important
The rest of this guide assumes your binary (from compiling or installing) is located at
/usr/local/bin/hammerhead, and that /usr/local/bin is in your $PATH. It also assumes you have read+write+execute access to /etc/hammerhead.
Before you daemonise Hammerhead, you’re going to want to create the template configuration file. When Hammerhead is unable to load a configuration file, it will create one with some default values, so we’re going to use this to create a sparse config file that you can modify with your values.
First, create the configuration file by running hammerhead -config /etc/hammerhead/config.yaml. Upon success,
this will create a YAML file at that location with some example values. You must edit at least:
- server_name - set this to your server name (the part after the : in your user IDs).
- database.url - set this to the connection URI of your postgres server. Must contain the username, password, host, and database name for the connection.
Caution
After you start the server for the first time, the
server_name cannot be changed. Make sure you’re certain before you hit run!
You may also wish to modify the other options, such as the registration.token (to change it to one of your
choosing), changing the password requirements, increasing the max_request_bytes (to allow larger file uploads),
or modifying the listeners.
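Putting the two required edits together, a minimal config might look like the following sketch - every value here is a placeholder for your own:

```yaml
# Minimal sketch of /etc/hammerhead/config.yaml - all values are placeholders.
server_name: example.com  # the part after the ':' in your user IDs - cannot be changed later!
database:
  url: postgresql://hammerhead:changeme@localhost:5432/hammerhead?sslmode=disable
```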
A configuration reference does not yet exist but will be added at a later date.
Importing a previous installation’s signing key
It is possible to import another installation’s server signing key, provided you have it in a file with the format of
ed25519 KeyID PrivateKeyPart, for example: ed25519 a_bcde hnBjtF9w9AQAWfnhAHIV3fdu9QH0YX1xWlb0qEPjE4w
You can then import this key via hammerhead -import-signing-key path/to/file. Once you confirm you want to import it,
it will be imported as an “expired” key, meaning it can no longer be used to sign new events. It can, however, still be
used to verify events sent from before you set up Hammerhead.
Tip
If, for some reason, you wish to continue using the imported key to sign new events, you will need to manually do so via postgres:
UPDATE owned_signing_keys SET expired_ts = NULL;. While technically supported, you should not have multiple active signing keys - if Hammerhead automatically generated one, you should invalidate it before continuing with
hammerhead -invalidate-signing-key 'KEY_ID'. Again, once a key is expired, it cannot be used ever again.
Exporting your current key
If you wish to migrate your deployment to another homeserver implementation, you can export your active signing key into
the synapse format with hammerhead -export-signing-key. This does not expire the key, so it can be re-used as an
active key in another deployment (as long as it has the same server name).
Starting the server
To start Hammerhead after configuring it, you can just run the binary like you did the first time:
hammerhead -config /etc/hammerhead/config.yaml
#=== Hammerhead v0.0.1-dev ===
#Parsing command line flags...
#Loading configuration from /etc/hammerhead/config.yaml...
#Initialising logging...
#Dropping output into logging system...
#2026-03-09T19:48:00Z DBG Validating configuration
#2026-03-09T19:48:00Z INF Initialising server
#2026-03-09T19:48:00Z INF running database migrations
#2026-03-09T19:48:00Z INF finished running database migrations elapsed_ms=181
#2026-03-09T19:48:00Z WRN no signing key found, generating new one. If this was not expected, I hope you have a backup of the one you expected.
#2026-03-09T19:48:00Z INF generated new signing key key_id=ed25519:vgJGqQ
#2026-03-09T19:48:00Z INF loaded signing key key_id=ed25519:vgJGqQ
#2026-03-09T19:48:00Z INF initialising media repository
#2026-03-09T19:48:00Z INF media repository initialised elapsed_ms=2
#2026-03-09T19:48:00Z INF Passing off server startup to server instance
#2026-03-09T19:48:00Z INF Starting server on
#2026-03-09T19:48:00Z INF server started. ^C to shut down.
#2026-03-09T19:48:00Z INF starting listener address=:8008 tls=false
If your configuration is malformed in such a way that Hammerhead would not be able to operate, starting the server will fail, and it will tell you what you need to change.
As soon as you see server started. and starting listener, your server is ready to go! Keep it running like this
for the next step.
Creating your first user
After you’ve started the server, there will be no users. Open a client such as Element Web, Cinny, Sable, or Commet, and plug in your server name. From there, go to “register”, and put in the username and password you desire.
You will likely be challenged to supply the token that is under the registration section of your configuration.
This is to prevent automated scripts or abusive users from finding your server, and registering accounts without your
supervision.
Upon registering the first user, you will be granted admin, which allows you to control things that happen on your deployment via the admin API. TODO: Admin API reference.
Any users registered after the first will simply be standard users who have no special control or rights on the server.
Tip
You can mark anyone as an admin by updating their admin status via postgres:
UPDATE accounts SET admin = true WHERE localpart = 'username_here'; (NOTE: username, not user ID)
Daemonising
In order to run Hammerhead 24/7, you will want to first interrupt the running server by hitting CTRL+C (running concurrent instances of Hammerhead is not safe), and then create a systemd unit file like the one below:
# /etc/systemd/system/hammerhead.service
[Unit]
Description=Hammerhead Matrix homeserver
Documentation=https://timedout.codeberg.page/hammerhead
Requires=network.target
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
User=hammerhead
Group=hammerhead
ExecStart=/usr/local/bin/hammerhead
WorkingDirectory=/etc/hammerhead
Environment=HAMMERHEAD_CONFIG=/etc/hammerhead/config.yaml
Restart=on-failure
RestartSec=2
RestartSteps=5
RestartMaxDelaySec=1m
StartLimitIntervalSec=1m
StartLimitBurst=5
[Install]
WantedBy=multi-user.target
To use this systemd unit, you will also need to create a user and group for Hammerhead, which can be achieved on most
Linux distributions like so: adduser --system --group --home /etc/hammerhead hammerhead. Make sure you
chown -R hammerhead:hammerhead /etc/hammerhead and chmod 755 /usr/local/bin/hammerhead so that Hammerhead can
read & write in its site directory, and execute the binary.
You can then systemctl daemon-reload to init the unit file, and then use systemctl enable --now hammerhead.service
to both enable the service at startup, and also start it immediately. Once start returns, you can check the status
and most recent log lines of Hammerhead by running systemctl status hammerhead.service:
systemctl status hammerhead.service
#● hammerhead.service - hammerhead
# Loaded: loaded (/etc/systemd/system/hammerhead.service; enabled; preset: enabled)
# Active: active (running) since Mon 2026-03-09 19:48:00 GMT; 0s ago
# Main PID: 129208 (hammerhead)
# Tasks: 9 (limit: 57189)
# Memory: 8.6M (peak: 10.6M)
# CPU: 23ms
# CGroup: /system.slice/hammerhead.service
# └─129208 /usr/local/bin/hammerhead
#
#Mar 09 19:48:00 hammerhead-staging hammerhead[129208]: 2026-03-09T19:48:00Z INF Initialising server
#Mar 09 19:48:00 hammerhead-staging hammerhead[129208]: 2026-03-09T19:48:00Z INF running database migrations
#Mar 09 19:48:00 hammerhead-staging hammerhead[129208]: 2026-03-09T19:48:00Z INF finished running database migrations elapsed_ms=31
#Mar 09 19:48:00 hammerhead-staging hammerhead[129208]: 2026-03-09T19:48:00Z INF loaded signing key key_id=ed25519:lZ2Vwg
#Mar 09 19:48:00 hammerhead-staging hammerhead[129208]: 2026-03-09T19:48:00Z INF initialising media repository
#Mar 09 19:48:00 hammerhead-staging hammerhead[129208]: 2026-03-09T19:48:00Z INF media repository initialised elapsed_ms=3
#Mar 09 19:48:00 hammerhead-staging hammerhead[129208]: 2026-03-09T19:48:00Z INF Passing off server startup to server instance
#Mar 09 19:48:00 hammerhead-staging hammerhead[129208]: 2026-03-09T19:48:00Z INF Starting server on
#Mar 09 19:48:00 hammerhead-staging hammerhead[129208]: 2026-03-09T19:48:00Z INF server started. ^C to shut down.
#Mar 09 19:48:00 hammerhead-staging hammerhead[129208]: 2026-03-09T19:48:00Z INF starting listener address=0.0.0.0:8008 tls=false
Configuration Reference
Hammerhead uses YAML v1.2.2 for configuration. While not the prettiest option for configuration, other languages have been tried before and found to be even worse, so you’ll live (and you likely already know the syntax anyway).
The default configuration is generated at ./config.yaml (i.e. in the directory hammerhead is executed in).
The default configuration does not contain all possible values, but will contain all required values with at least a
placeholder value.
This reference will list every config option available, with a description, and an example.
Required keys:
database
database (mapping, required): The configuration for the PostgreSQL database connection.
Examples:
database:
url: postgresql://user:password@hostname:port/dbname?sslmode=disable
database:
url: postgresql://user:password@hostname:port/dbname?sslmode=disable
max_idle_connections: 2
max_idle_lifetime: 5m
max_open_connections: 5
max_open_lifetime: 5m
database.url
url (string, required): The URI to connect to.
This string SHOULD be prefixed with postgresql://, but postgres:// will work for compatibility reasons.
This connection string is passed directly to the database driver, so you can configure other connection-related settings
in this URL (using libpq style query args -
see https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS for more info).
Example:
database:
url: postgresql://user:password@hostname:port/dbname?sslmode=disable
database.max_idle_connections
database.max_idle_connections (integer, optional): The maximum number of idle (not actively running a query)
connections to keep open in the connection pool.
See: max_open_connections
database.max_open_connections
database.max_open_connections (integer, optional): The maximum number of active (running a query) connections the
connection pool is allowed to have.
These options configure the maximum number of parallel connections available to Hammerhead. Generally, you shouldn’t need too many (2+2 may be a good starting point), and you should also be conscious of the additional resource usage incurred by having more postgres connections on the postgres server. More connections will allow for more concurrent operations, but you should only really be concerned about that if your server is exceptionally high traffic and you’re seeing lots of warnings about transactions taking a long time.
By default, there is no connection pooling - this is expected to change.
Examples:
database:
max_idle_connections: 2
database:
max_open_connections: 2
database:
max_idle_connections: 2
max_open_connections: 2
database.max_idle_lifetime
database.max_idle_lifetime (string (duration), optional): The maximum lifetime of an idle connection.
See: max_open_lifetime
database.max_open_lifetime
database.max_open_lifetime (string (duration), optional): The maximum lifetime of an active connection.
Controls how long connections live for before being destroyed. You can usually leave this disabled if you aren’t running a fancy postgres server configuration, but if you are, you probably know what these values should be anyway.
Example:
database:
max_idle_lifetime: 5m
database:
max_open_lifetime: 5m
database:
max_idle_lifetime: 5m
max_open_lifetime: 5m
debug
debug (boolean, optional): Enable or disable debug mode.
While you can still get debug logs and whatnot with this option disabled, setting debug: true will enable additional
runtime checks that may affect how the server operates. It is designed to be used with a step-through debugger in mind,
so sometimes some conditions that would usually be handled by logging an error will instead cause actual panics and
potentially crashes. An example of a side effect of enabling this option is that request handlers that don’t write a
response body will cause a crash - with debug disabled, this will log an error instead.
You usually only want to enable this if you are actively debugging Hammerhead.
Example:
debug: true
caches
caches (mapping, optional): Controls the sizes and lifetimes of some runtime caches.
Each cache map has two keys, max_entries, and max_ttl. max_entries (integer, optional) controls how many entries
can be in the cache before old ones start being evicted. By default, it is 4096 x CPU_CORE_COUNT. Set this to zero
to disable limiting the size of caches (not recommended).
max_ttl controls how long entries are allowed to remain in the cache before they are evicted. Set to zero to disable
TTL eviction.
Warning
TTL-based cache evictions are checked on each cache operation (both read and write), which means they are inherently more computationally expensive than simple size-based limits.
On the other hand, size-based eviction is only evaluated on write operations, making it generally cheaper, but it may result in Hammerhead holding on to memory purely for “stale” cache entries.
It is recommended you leave caches as their default values unless you are encountering memory constraint issues.
Examples:
caches:
events:
max_entries: 8192
max_ttl: 5m
caches.events
events (mapping, optional): Controls the event cache.
The event cache is a key-value map of {event_id: event_data}. Each value may be up to 64KiB.
Generally you want quite a large event cache, and this is the first thing that will be hit during operations that
involve events (so, most of them), for example: state resolution, fetching events, client sync loops, message
pagination. Setting a lower value will free up memory, but will result in having to run to the database more often to
fetch events.
See caches for more details.
Example:
caches:
events:
max_entries: 8192
listeners
listeners (sequence of mappings, required): Configures the addresses that Hammerhead will listen on.
Currently, only TCP socket listeners are supported - unix socket listeners are planned. You must specify at least one listener, but can have as many as you like.
Each listener specified has three keys: host, port, and tls:
- host: The address to listen on. Usually 127.0.0.1 and ::1 for localhost, or 0.0.0.0 and :: for all addresses.
- port: The port to listen on. Must be between 1 and 65535 inclusive.
- tls (optional): If true, this listener will use the TLS configuration.
Note
Native TLS is primarily only included for running the test suite. TLS should normally be terminated by your reverse proxy, unless you have an advanced use case.
Tip
All routes (client-to-server, appservices, key-server, server-to-server) are handled by all listeners. If you are used to the Synapse style of having to define which listeners handle which routes, you need not do that here.
127.0.0.1:8008 and 127.0.0.2:8448 both run through the same router, for example.
Example:
listeners:
- host: 0.0.0.0
port: 8008
- host: "::"
port: 8008
- host: 0.0.0.0
port: 8448
tls: true
- host: "::"
port: 8448
tls: true
dont_expose_metrics
dont_expose_metrics (boolean, optional): If true, don’t expose /metrics.
Disables the prometheus metrics exporter route.
Example:
dont_expose_metrics: true
logging
logging (mapping, optional): The configuration for zerolog using zeroconfig.
See zeroconfig for the full schema.
If there are no loggers configured, a coloured “pretty” stdout logger will be configured for you. If you are in debug mode (or Hammerhead was built with a dirty working tree), an additional trace-level JSON logger will be configured for you too.
Example:
logging:
writers:
# - type: journald # uncomment if you're using journald.
- type: stdout
format: pretty-colored
min_level: info
- type: file
format: json
min_level: debug
filename: hammerhead.log # JSON-line file
max_size: 100
max_age: 30
max_backups: 3
compress: true
max_request_bytes
max_request_bytes (integer, optional): The maximum size of a request (in bytes) to read before aborting.
Defaults to 100MiB (104857600).
Since request bodies are buffered into memory (including media) before they’re worked on, it is generally necessary to limit the size of request bodies that will be read. If a request sends a body larger than this, it will only be partially read, and then rejected when the server realises it’s too large (at least one byte over the limit). As a result, this is typically used to limit the size of uploaded media, but that will likely become its own option in the media repository configuration later on.
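The byte counts used in the examples below are simply mebibyte multiples, which you can compute yourself rather than copying magic numbers:

```shell
# Convert MiB to bytes: 1 MiB = 1024 * 1024 bytes.
mib_to_bytes() {
    echo $(( $1 * 1024 * 1024 ))
}

mib_to_bytes 25    # 26214400
mib_to_bytes 100   # 104857600
```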
Caution
Setting this value too high will either result in the OOM reaper coming knocking, and terminating the server process, or potentially undefined behaviour ranging from catchable alloc failures, to critical panics.
Examples:
max_request_bytes: 26214400 # 25MiB
max_request_bytes: 52428800 # 50MiB
max_request_bytes: 104857600 # 100MiB (higher values are typically excessive and unsafe)
max_request_bytes: 536870912 # 512MiB
max_request_bytes: 1073741824 # 1GiB
media_repo
media_repo (mapping, required): The configuration for the media repository.
Hammerhead’s media repository is quite a complex component that is incredibly flexible, so there are a number of configuration options to play with. Fear not, only a couple are necessary.
media_repo.root_path
root_path (string, required): The root path to where media should be stored.
This can either be a fully qualified absolute path, a relative path, or even point at a symlink,
as long as the subdirectories local, remote, and external can be created at that location.
If the root_path does not exist, it will be created with 750 file permissions (rwxr-x---).
Example:
media_repo:
root_path: /mnt/media/hammerhead
media_repo.temp_path
temp_path (string, required): The path to where temporary media files should be stored.
This can either be a fully qualified absolute path, or a relative path, or even point at a symlink.
Temporary directories starting with the prefix hammerhead_media_ must be creatable by the server.
If the temp_path does not exist, it will be created with 750 file permissions (rwxr-x---).
“temporary media files” are typically files that are being actively uploaded (i.e. before they’re properly saved), and thumbnails that are being worked on. It is unlikely that files will remain in this directory for more than a few seconds at a time.
It is safe to put the temporary directory on an ephemeral file system.
Example:
media_repo:
temp_path: /mnt/media/hammerhead
media_repo.max_size_bytes
max_size_bytes (unsigned 64-bit integer, optional): The maximum size (in bytes) of a single media item. Defaults
to 50MiB (52428800).
Controls the maximum size of file uploads. Attempts to upload media files that are larger than this value will be
rejected, even if the uploader is an administrator.
The server will attempt to reject large uploads if their advertised Content-Length exceeds this value, but in cases
where the Content-Length header is unavailable, the server will read up to max_size_bytes+1 bytes to determine
whether the file is too large.
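The “read up to max_size_bytes+1” check can be illustrated with standard tools - reading one byte past the limit is enough to know the body is too large, without buffering the rest:

```shell
# Illustration only: read at most max+1 bytes from the input.
# If we actually received max+1 bytes, the input must be larger than max.
max=5
chunk="$(printf 'hello world' | head -c $((max + 1)))"
if [ "${#chunk}" -gt "$max" ]; then
    echo "too large"
else
    echo "ok"
fi
```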
Important
The value of max_size_bytes MUST be less than or equal to max_request_bytes. Setting a value higher than max_request_bytes would cause the router component to reject the request for being too large before it could be passed to the media repository component for validation. The server will refuse to start if this condition is not met.
Examples:
media_repo:
max_size_bytes: 8388608 # 8MiB
media_repo:
max_size_bytes: 26214400 # 25MiB
media_repo:
max_size_bytes: 52428800 # 50MiB
media_repo:
max_size_bytes: 104857600 # 100MiB
media_repo:
max_size_bytes: 536870912 # 512MiB
media_repo:
max_size_bytes: 1073741824 # 1GiB
media_repo.security
security (mapping, optional): Configures the security-related settings for the media repository.
Because the media repository is a complex component that exclusively handles potentially untrusted user input, there are several security configurations available. The configuration for this is structured in such a way that the default values are typically sufficient for most people.
media_repo.security.disable_remote_media
disable_remote_media (boolean, optional): If enabled, disables external/remote media functionality.
Defaults to false.
When remote media is disabled, federated media will not be fetched, federated requests for media will be rejected, and server-side URL previews will be disabled.
Example:
media_repo:
security:
disable_remote_media: true
media_repo.security.disallow_mime_types
disallow_mime_types (sequence of strings, optional): Prevents files matching any of the given glob patterns from being
uploaded to the media repository.
When a user attempts to upload a file, if the claimed Content-Type matches any of the given glob patterns, it will be
rejected, even if they are an administrator.
Warning
The content type of encrypted files is
application/octet-stream. Using a glob pattern that blocks this will effectively prevent users from uploading encrypted files. Furthermore, Hammerhead does not currently support MIME sniffing, so malicious users can work around this restriction by lying about or omitting the relevant header.
Example:
media_repo:
security:
disallow_mime_types:
- image/* # ban all images
- application/vnd.microsoft.portable-executable # ban EXE files
- application/zip
- application/x-zip-compressed
- application/gzip
- application/zstd # ban compressed types
media_repo.security.only_admins
only_admins (boolean, optional): If true, only server administrators can upload media.
When enabled, regular users are unable to upload media. This can be used to restrict media uploads to only trusted users.
Example:
media_repo:
security:
only_admins: true
media_repo.security.disable_checksums
disable_checksums (boolean, optional): If true, disable checksum generation and comparison.
Hammerhead makes use of SHA-256 checksums to verify the integrity of media when utilising it. Typically, a SHA-256 checksum is generated when a file has finished uploading, and is then verified before the file is transmitted to requesting clients. This prevents the file from being tampered with on disk (although this is easily circumvented by simply modifying the hash in the database). Over federation, Hammerhead will include this SHA-256 hash in the metadata part of the download, before sending the media content itself. This means other Hammerhead servers that download media from this server can verify the integrity of the downloaded file before processing it. This is not a Matrix behaviour and is currently exclusive to Hammerhead.
Turning off checksumming may improve performance, as it avoids an extra disk roundtrip; however, it opens up the potential for corrupted or tampered files to be served.
Files that fail checksum validation are not immediately deleted, but will cause an error.
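The integrity check described above can be reproduced by hand with standard tools. The file and expected hash here are purely illustrative (Hammerhead stores the real hashes in its database):

```shell
# Recompute a file's SHA-256 and compare it to a stored value.
# The expected value below is the well-known SHA-256 of the string "hello".
expected="2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
printf 'hello' > /tmp/media_sample
actual="$(sha256sum /tmp/media_sample | cut -d' ' -f1)"
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH"
fi
```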
Example:
media_repo:
security:
disable_checksums: true # not recommended
media_repo.security.minimum_account_age
minimum_account_age (string (duration), optional): The minimum age an account must be before it can upload files.
Restricts uploading media to accounts that have existed for longer than the given duration, excluding administrators.
Example:
media_repo:
security:
minimum_account_age: 5m # 5 minutes
media_repo.security.disable_server_side_thumbnails
disable_server_side_thumbnails (boolean, optional): Disables server-side thumbnail generation.
Disabling server-side thumbnails may be desirable to reduce the amount of processing done on user-generated content, which is a large attack surface. The downside of this is that thumbnails will generally be unavailable for uploaded media, such as user avatars, resulting in increased bandwidth and unhappy impatient users.
Example:
media_repo:
security:
disable_server_side_thumbnails: true
media_repo.security.acl
acl (mapping, optional): An ACL event body that defines which servers are and aren’t allowed to
communicate media.
Sets an access-control-list in the same way as room ACLs - servers in the allow list are always allowed, unless they
are denied in the deny list. You cannot create an ACL that bans the local server.
The ACL is bidirectional - forbidden servers won’t be able to download media from you, but you also won’t be able to download media from them.
Examples:
# Explicit denylist
media_repo:
security:
acl:
allow: ["*"] # Allow all servers
deny:
- evil.matrix.example # Don't allow media communication with evil.matrix.example specifically.
- "*.bad.matrix.example" # Don't allow media communication with any server name under "bad.matrix.example".
# Explicit allowlist
media_repo:
security:
acl:
allow:
- "SERVER_NAME_HERE" # Your server name has to be explicitly listed
- "good.matrix.example" # Allow media communication with good.matrix.example
# No need for an explicit `deny` here.
media_repo.security.enable_streaming
enable_streaming (boolean, optional): Allow streaming media over federation.
When enabled, media can be streamed directly to the requesting client before the server has finished downloading it over federation. This is incompatible with checksum verification, which will instead be ignored in this case. The benefit of this feature is that users can start streaming files from remote servers almost immediately, rather than having to wait for the homeserver to finish downloading it before uploading it again, which is particularly useful for videos. However, the lack of checksum verification, or preprocessing as a whole, means that other security protections may not be effective, and potentially invalid or illegal data may be sent to the client unknowingly.
You should evaluate how this fits into your threat model before changing this value.
Example:
```yaml
media_repo:
  security:
    enable_streaming: true
```
old_verify_keys
old_verify_keys (mapping, optional): A mapping of previous signing key IDs to when they expired.
A map of old signing keys that can be used to verify events. You should prefer to import keys via the command
hammerhead -import-signing-key, but if you no longer have the private key, you can advertise the public key here instead.
The keys of the map are the key IDs (e.g. ed25519:foo), and each value is a mapping with two required keys:
- key: The full public signing key of this key.
- expired_ts: The unix timestamp (milliseconds) when this key expired.
Example:
```yaml
old_verify_keys:
  "ed25519:auto":
    key: Noi6WqcDj0QmPxCNQqgezwTlBKrfqehY1u2FyWP9uYw
    expired_ts: 1576767829750
```
default_room_version
default_room_version (string, optional): Defines the default room version for new rooms.
While clients can specify the room version they want to create when calling /createRoom, if they do not, the
room version specified here will be used instead.
Note
You probably don’t need to set this - the latest version that Hammerhead fully supports is used by default, meaning generally you should just update your server if the default version is not new enough. Overriding the default value may have unintended consequences.
Example:
```yaml
default_room_version: "12"
```
registration
registration (mapping, optional): The registration settings for this server.
Controls the registration requirements for the server. If omitted, registration is disabled.
Example:
```yaml
registration:
  enabled: true
  token: ed03d0b58fa20618be08fe20c1eb05a0
  password_requirements:
    min_entropy: 70
```
registration.enabled
enabled (boolean, optional): Whether registration is enabled at all. Defaults to false.
If registration is disabled, no new accounts can be created without the admin API, even if requirements like a token are set.
Example:
```yaml
registration:
  enabled: true
```
registration.i_have_a_very_good_reason_or_i_am_stupid_and_want_to_allow_unsafe_open_registration
i_have_a_very_good_reason_or_i_am_stupid_and_want_to_allow_unsafe_open_registration (boolean, optional):
If set to true, registration will be unsafely open.
Enables registration without any registration requirements. This is dangerous: your only defense against automated bots mass-registering on your server is rate limits, and you have no way to prevent untrusted users from registering and potentially being abusive. You should never need to enable this!
Example:
```yaml
registration:
  i_have_a_very_good_reason_or_i_am_stupid_and_want_to_allow_unsafe_open_registration: false
```
registration.token
token (string, optional): The pre-shared secret to challenge registrations with.
A pre-shared token that is required as a second step to register an account. This allows you to give out an “invite code” to people you trust, so that they can create accounts themselves. This is the recommended way to have registration enabled.
If omitted or empty, the registration token requirement is disabled.
Example:
```yaml
registration:
  token: ed03d0b58fa20618be08fe20c1eb05a0
```
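Assuming Hammerhead implements the standard m.login.registration_token user-interactive auth type from the Matrix client-server spec, a client would supply the token inside the auth dict of its /register request, e.g.:

```json
{
  "username": "alice",
  "password": "correct horse battery staple",
  "auth": {
    "type": "m.login.registration_token",
    "token": "ed03d0b58fa20618be08fe20c1eb05a0",
    "session": "<session ID from the initial /register response>"
  }
}
```

Most clients handle this flow automatically and simply prompt the user for the token during sign-up.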
registration.password_requirements
password_requirements (mapping, optional): Controls the requirements for passwords on this server.
Allows you to set minimum password length and entropy requirements. Not applied to accounts created via the admin API.
Example:
```yaml
registration:
  password_requirements:
    min_entropy: 50 # Require at least 50 bits of entropy.
    min_length: 0   # Disable length requirements.
```
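To give a feel for what "bits of entropy" means, here is a naive Go estimate: password length times log2 of the size of the character set in use. Real strength estimators (and possibly Hammerhead's) are more sophisticated than this; it is purely an illustration of the unit:

```go
package main

import (
	"fmt"
	"math"
	"unicode"
)

// naiveEntropyBits gives a rough upper-bound entropy estimate by
// assuming each character is drawn uniformly from the union of the
// character classes that appear in the password.
func naiveEntropyBits(password string) float64 {
	var lower, upper, digit, other bool
	for _, r := range password {
		switch {
		case unicode.IsLower(r):
			lower = true
		case unicode.IsUpper(r):
			upper = true
		case unicode.IsDigit(r):
			digit = true
		default:
			other = true
		}
	}
	charset := 0
	if lower {
		charset += 26
	}
	if upper {
		charset += 26
	}
	if digit {
		charset += 10
	}
	if other {
		charset += 33 // roughly the printable ASCII punctuation set
	}
	if charset == 0 {
		return 0
	}
	return float64(len([]rune(password))) * math.Log2(float64(charset))
}

func main() {
	// 7 chars, lowercase + digits: well below a min_entropy of 50.
	fmt.Printf("%.0f bits\n", naiveEntropyBits("hunter2"))
	// A long passphrase easily clears the bar.
	fmt.Printf("%.0f bits\n", naiveEntropyBits("correct horse battery staple"))
}
```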
room_directory.admin_only
admin_only (boolean, optional): If enabled, only server admins are able to publish to the room directory.
When enabled, only admin users can publish rooms to the public room directory (i.e. room list). Note that when enabled, users can still create and use aliases, they just cannot publish them.
Example:
```yaml
room_directory:
  admin_only: true
```
server_name
server_name (string, required): The name of this server.
This is not necessarily your domain name - it is the part that appears at the end of user IDs and room
aliases. For example, @user:matrix.example has the server name matrix.example, even if traffic is
ultimately served from hammerhead.matrix.example.
The server name can be an IP address (not recommended) or DNS name, optionally with a port. Typically, you will use a DNS name without a port here (you configure the port later).
Caution
You cannot change the server name after registering the first user.
Examples:
```yaml
server_name: matrix.example
```

```yaml
server_name: matrix.example:8448 # not recommended
```
tls
tls (mapping, optional): Configures TLS options for TLS listeners.
Configures the certificate file and key file for serving TLS directly from listeners. You can generate a self-signed certificate with:

```shell
openssl req -newkey rsa:4096 -nodes -keyout key.pem -x509 -days 365 -out cert.pem
```
Example:
```yaml
tls:
  cert_file: path/to/cert.pem
  key_file: path/to/key.pem
```
well_known
well_known (mapping, optional): Controls the values returned to the well-known helper routes.
well_known.client
client (string, optional): The base URL for client-to-server interactions.
Example:
```yaml
well_known:
  client: https://client.matrix.example
```
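With the client value above, a GET to /.well-known/matrix/client on the server_name domain would return a body like the following (the response format is defined by the Matrix client-server spec's server discovery section):

```json
{
  "m.homeserver": {
    "base_url": "https://client.matrix.example"
  }
}
```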