author    pennae <github@quasiparticle.net>  2023-01-03 00:24:36 +0100
committer pennae <github@quasiparticle.net>  2023-01-10 10:31:55 +0100
commit    5b012f2c5563494f8bd0277feb9be8c3dc6cb1ce (patch)
tree      f5ba21e030ddc51f3f9df0ce6aab29f511574fd2 /nixos/modules/services/databases
parent    1ce4fde27b62b878d60cd3e9baad5ae5b0042a45 (diff)
nixos/foundationdb: convert manual chapter to MD
Diffstat (limited to 'nixos/modules/services/databases')
-rw-r--r--  nixos/modules/services/databases/foundationdb.md   309
-rw-r--r--  nixos/modules/services/databases/foundationdb.nix    2
-rw-r--r--  nixos/modules/services/databases/foundationdb.xml  732
3 files changed, 667 insertions, 376 deletions
diff --git a/nixos/modules/services/databases/foundationdb.md b/nixos/modules/services/databases/foundationdb.md
new file mode 100644
index 000000000000..f852c6888d84
--- /dev/null
+++ b/nixos/modules/services/databases/foundationdb.md
@@ -0,0 +1,309 @@
+# FoundationDB {#module-services-foundationdb}
+
+*Source:* {file}`modules/services/databases/foundationdb.nix`
+
+*Upstream documentation:* <https://apple.github.io/foundationdb/>
+
+*Maintainer:* Austin Seipp
+
+*Available version(s):* 5.1.x, 5.2.x, 6.0.x
+
+FoundationDB (or "FDB") is an open source, distributed, transactional
+key-value store.
+
+## Configuring and basic setup {#module-services-foundationdb-configuring}
+
+To enable FoundationDB, add the following to your
+{file}`configuration.nix`:
+```
+services.foundationdb.enable = true;
+services.foundationdb.package = pkgs.foundationdb52; # FoundationDB 5.2.x
+```
+
+The {option}`services.foundationdb.package` option is required, and
+must always be specified. Because FoundationDB's network protocols and
+on-disk storage formats may change between major versions, and upgrades
+must be handled explicitly by the user, you must always specify the
+package yourself so that the NixOS module will use the proper version.
+Note that minor, bugfix releases are always compatible.
+
+After running {command}`nixos-rebuild`, you can verify whether
+FoundationDB is running by executing {command}`fdbcli` (which is
+added to {option}`environment.systemPackages`):
+```ShellSession
+$ sudo -u foundationdb fdbcli
+Using cluster file `/etc/foundationdb/fdb.cluster'.
+
+The database is available.
+
+Welcome to the fdbcli. For help, type `help'.
+fdb> status
+
+Using cluster file `/etc/foundationdb/fdb.cluster'.
+
+Configuration:
+ Redundancy mode - single
+ Storage engine - memory
+ Coordinators - 1
+
+Cluster:
+ FoundationDB processes - 1
+ Machines - 1
+ Memory availability - 5.4 GB per process on machine with least available
+ Fault Tolerance - 0 machines
+ Server time - 04/20/18 15:21:14
+
+...
+
+fdb>
+```
+
+You can also write programs using the available client libraries. For
+example, the following Python program grabs the cluster status. (This
+example uses {command}`nix-shell` shebang support to automatically
+supply the necessary Python modules.)
+```ShellSession
+a@link> cat fdb-status.py
+#! /usr/bin/env nix-shell
+#! nix-shell -i python -p python pythonPackages.foundationdb52
+
+import fdb
+import json
+
+def main():
+ fdb.api_version(520)
+ db = fdb.open()
+
+ @fdb.transactional
+ def get_status(tr):
+ return str(tr['\xff\xff/status/json'])
+
+ obj = json.loads(get_status(db))
+ print('FoundationDB available: %s' % obj['client']['database_status']['available'])
+
+if __name__ == "__main__":
+ main()
+a@link> chmod +x fdb-status.py
+a@link> ./fdb-status.py
+FoundationDB available: True
+a@link>
+```
+
+FoundationDB is run under the {command}`foundationdb` user and group
+by default, but this may be changed in the NixOS configuration. The systemd
+unit {command}`foundationdb.service` controls the
+{command}`fdbmonitor` process.
+
+By default, the NixOS module for FoundationDB creates a single
+SSD-storage based database for development and basic usage. This
+storage engine is designed for SSDs and will perform poorly on HDDs;
+however, it can handle far more data than the alternative "memory"
+engine and is a better default choice for most deployments. (Note that
+you can change the storage backend on the fly for a given FoundationDB
+cluster using {command}`fdbcli`.)
+
+Furthermore, only 1 server process and 1 backup agent are started in the
+default configuration. See below for more on scaling to increase this.
+
+FoundationDB stores all data for all server processes under
+{file}`/var/lib/foundationdb`. You can override this using
+{option}`services.foundationdb.dataDir`, e.g.
+```
+services.foundationdb.dataDir = "/data/fdb";
+```
+
+Similarly, logs are stored under {file}`/var/log/foundationdb`
+by default, and there is a corresponding
+{option}`services.foundationdb.logDir` as well.
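+
+For example (the path shown here is hypothetical):
+```
+services.foundationdb.logDir = "/data/fdb-logs";
+```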
+
+## Scaling processes and backup agents {#module-services-foundationdb-scaling}
+
+Scaling the number of server processes is quite easy; simply specify
+{option}`services.foundationdb.serverProcesses` to be the number of
+FoundationDB worker processes that should be started on the machine.
+
+FoundationDB worker processes typically require at least 4GB of RAM
+per process for good performance, so this option defaults to 1, since
+the amount of RAM available on a machine is unknown. You're advised to
+abide by this guideline: pick a number of processes such that each has
+4GB or more.
+
+A similar option exists in order to scale backup agent processes,
+{option}`services.foundationdb.backupProcesses`. Backup agents are
+not as performance/RAM sensitive, so feel free to experiment with the number
+of available backup processes.
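+
+For example, on a machine with 16GB of RAM, a scaled-up configuration
+might look like the following (the process counts are illustrative, not
+a recommendation):
+```
+services.foundationdb.serverProcesses = 3; # ~4GB+ of RAM per worker
+services.foundationdb.backupProcesses = 2; # less RAM-sensitive
+```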
+
+## Clustering {#module-services-foundationdb-clustering}
+
+FoundationDB on NixOS works similarly to other Linux systems, so this
+section will be brief. Please refer to the full FoundationDB documentation
+for more on clustering.
+
+FoundationDB organizes clusters using a set of
+*coordinators*, which are just specially-designated
+worker processes. By default, every installation of FoundationDB on NixOS
+will start as its own individual cluster, with a single coordinator: the
+first worker process on {command}`localhost`.
+
+Coordinators are specified globally using the
+{command}`/etc/foundationdb/fdb.cluster` file, which all servers and
+client applications will use to find and join coordinators. Note that this
+file *cannot* be managed by NixOS so easily:
+FoundationDB is designed so that it will rewrite the file at runtime for all
+clients and nodes when cluster coordinators change, with clients
+transparently handling this without intervention. It is fundamentally a
+mutable file, and you should not try to manage it in any way in NixOS.
+
+When dealing with a cluster, there are two main things you want to do:
+
+ - Add a node to the cluster for storage/compute.
+ - Promote an ordinary worker to a coordinator.
+
+A node must already be a member of the cluster before it can be
+promoted to a coordinator, so you must always add it first if you wish
+to promote it.
+
+To add a machine to a FoundationDB cluster:
+
+ - Choose one of the servers to start as the initial coordinator.
+ - Copy the {command}`/etc/foundationdb/fdb.cluster` file from this
+ server to all the other servers. Restart FoundationDB on all of these
+ other servers, so they join the cluster.
+ - All of these servers are now connected and working together in the
+ cluster, under the chosen coordinator.
+
+At this point, you can add as many nodes as you want by just repeating the
+above steps. By default there will still be a single coordinator: you can
+use {command}`fdbcli` to change this and add new coordinators.
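+
+The copy-and-restart step above can be sketched as follows (the
+hostname is hypothetical, and any file-copy mechanism will do):
+```ShellSession
+$ scp /etc/foundationdb/fdb.cluster node2:/etc/foundationdb/fdb.cluster
+$ ssh node2 systemctl restart foundationdb.service
+```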
+
+As a convenience, FoundationDB can automatically assign coordinators based
+on the redundancy mode you wish to achieve for the cluster. Once all the
+nodes have been joined, simply set the replication policy, and then issue
+the {command}`coordinators auto` command.
+
+For example, assuming we have 3 nodes available, we can enable double
+redundancy mode, then auto-select coordinators. For double redundancy, 3
+coordinators is ideal: therefore FoundationDB will make
+*every* node a coordinator automatically:
+
+```ShellSession
+fdbcli> configure double ssd
+fdbcli> coordinators auto
+```
+
+This will transparently update all the servers within seconds,
+appropriately rewrite the {command}`fdb.cluster` file, and inform all
+client processes to do the same.
+
+## Client connectivity {#module-services-foundationdb-connectivity}
+
+By default, all clients must use the current {command}`fdb.cluster`
+file to access a given FoundationDB cluster. This file is located by default
+in {command}`/etc/foundationdb/fdb.cluster` on all machines with the
+FoundationDB service enabled, so you may copy the active one from your
+cluster to a new node in order to connect, if it is not part of the cluster.
+
+## Client authorization and TLS {#module-services-foundationdb-authorization}
+
+By default, any user who can connect to a FoundationDB process with the
+correct cluster configuration can access anything. FoundationDB uses a
+pluggable design for transport security, and out of the box it supports
+a LibreSSL-based plugin for TLS. This plugin not only does in-flight
+encryption, but also performs client authorization based on the given
+endpoint's certificate chain. For example, a FoundationDB server may be
+configured to only accept client connections over TLS, where the client TLS
+certificate is from organization *Acme Co* in the
+*Research and Development* unit.
+
+Configuring TLS with FoundationDB is done using the
+{option}`services.foundationdb.tls` options in order to control the
+peer verification string, as well as the certificate and its private key.
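+
+A sketch of such a configuration might look like the following; the
+file paths are hypothetical, and the exact attribute names under
+{option}`services.foundationdb.tls` should be checked against the
+module's options list:
+```
+services.foundationdb.tls = {
+  certificate = "/etc/foundationdb/fdb.pem";     # hypothetical path
+  key = "/etc/foundationdb/private.pem";         # hypothetical path
+  allowedPeers = "Check.Valid=1,Check.Unexpired=1";
+};
+```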
+
+Note that the certificate and its private key must be accessible to the
+FoundationDB user account that the server runs under. These files are also
+NOT managed by NixOS, as putting them into the store may reveal private
+information.
+
+After you have a key and certificate file in place, it is not enough to
+simply set the NixOS module options -- you must also configure the
+{command}`fdb.cluster` file to specify that a given set of
+coordinators use TLS. This is as simple as adding the suffix
+{command}`:tls` to your cluster coordinator configuration, after the
+port number. For example, assuming you have a coordinator on localhost with
+the default configuration, simply specifying:
+
+```
+XXXXXX:XXXXXX@127.0.0.1:4500:tls
+```
+
+will configure all clients and server processes to use TLS from now on.
+
+## Backups and Disaster Recovery {#module-services-foundationdb-disaster-recovery}
+
+The usual rules for doing FoundationDB backups apply on NixOS as written in
+the FoundationDB manual. However, one important difference is the security
+profile for NixOS: by default, the {command}`foundationdb` systemd
+unit uses *Linux namespaces* to restrict write access to
+the system, except for the log directory, data directory, and the
+{command}`/etc/foundationdb/` directory. This is enforced by default
+and cannot be disabled.
+
+However, a side effect of this is that the {command}`fdbbackup`
+command doesn't work properly for local filesystem backups: FoundationDB
+uses a server process alongside the database processes to perform backups
+and copy the backups to the filesystem. As a result, this process is put
+under the restricted namespaces above: the backup process can only write to
+a limited number of paths.
+
+In order to allow flexible backup locations on local disks, the FoundationDB
+NixOS module supports a
+{option}`services.foundationdb.extraReadWritePaths` option. This
+option takes a list of paths and adds them to the systemd unit,
+allowing the processes inside the service to write to (and read from)
+the specified directories.
+
+For example, to create backups in {command}`/opt/fdb-backups`, first
+set up the paths in the module options:
+
+```
+services.foundationdb.extraReadWritePaths = [ "/opt/fdb-backups" ];
+```
+
+Restart the FoundationDB service, and it will now be able to write to
+this directory. Note: this path *must* exist before
+restarting the unit. Otherwise, systemd will not include it in the
+private FoundationDB namespace (and it will not add it dynamically at
+runtime).
+
+You can now perform a backup:
+
+```ShellSession
+$ sudo -u foundationdb fdbbackup start -t default -d file:///opt/fdb-backups
+$ sudo -u foundationdb fdbbackup status -t default
+```
+
+## Known limitations {#module-services-foundationdb-limitations}
+
+The FoundationDB setup for NixOS should currently be considered beta.
+FoundationDB is not new software, but the NixOS compilation and
+integration has only undergone fairly basic testing of the available
+functionality.
+
+ - There is no way to specify individual parameters for individual
+ {command}`fdbserver` processes. Currently, all server processes
+ inherit all the global {command}`fdbmonitor` settings.
+ - Ruby bindings are not currently installed.
+ - Go bindings are not currently installed.
+
+## Options {#module-services-foundationdb-options}
+
+NixOS's FoundationDB module lets you configure all of the most relevant
+options for {command}`fdbmonitor`, matching it quite closely. A
+complete list of options for the FoundationDB module may be found
+[here](#opt-services.foundationdb.enable). You should also read the
+FoundationDB documentation.
+
+## Full documentation {#module-services-foundationdb-full-docs}
+
+FoundationDB is a complex piece of software, and requires careful
+administration to properly use. Full documentation for administration can be
+found here: <https://apple.github.io/foundationdb/>.
diff --git a/nixos/modules/services/databases/foundationdb.nix b/nixos/modules/services/databases/foundationdb.nix
index 16d539b661eb..fdfe5a28f31a 100644
--- a/nixos/modules/services/databases/foundationdb.nix
+++ b/nixos/modules/services/databases/foundationdb.nix
@@ -424,6 +424,8 @@ in
};
};
+ # Don't edit the docbook xml directly, edit the md and generate it:
+ # `pandoc foundationdb.md -t docbook --top-level-division=chapter --extract-media=media -f markdown-smart --lua-filter ../../../../doc/build-aux/pandoc-filters/myst-reader/roles.lua --lua-filter ../../../../doc/build-aux/pandoc-filters/docbook-writer/rst-roles.lua > foundationdb.xml`
meta.doc = ./foundationdb.xml;
meta.maintainers = with lib.maintainers; [ thoughtpolice ];
}
diff --git a/nixos/modules/services/databases/foundationdb.xml b/nixos/modules/services/databases/foundationdb.xml
index b0b1ebeab45f..ae7a6dae955e 100644
--- a/nixos/modules/services/databases/foundationdb.xml
+++ b/nixos/modules/services/databases/foundationdb.xml
@@ -1,60 +1,56 @@
-<chapter xmlns="http://docbook.org/ns/docbook"
- xmlns:xlink="http://www.w3.org/1999/xlink"
- xmlns:xi="http://www.w3.org/2001/XInclude"
- version="5.0"
- xml:id="module-services-foundationdb">
- <title>FoundationDB</title>
- <para>
- <emphasis>Source:</emphasis>
- <filename>modules/services/databases/foundationdb.nix</filename>
- </para>
- <para>
- <emphasis>Upstream documentation:</emphasis>
- <link xlink:href="https://apple.github.io/foundationdb/"/>
- </para>
- <para>
- <emphasis>Maintainer:</emphasis> Austin Seipp
- </para>
- <para>
- <emphasis>Available version(s):</emphasis> 5.1.x, 5.2.x, 6.0.x
- </para>
- <para>
- FoundationDB (or "FDB") is an open source, distributed, transactional
- key-value store.
- </para>
- <section xml:id="module-services-foundationdb-configuring">
- <title>Configuring and basic setup</title>
-
+<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-foundationdb">
+ <title>FoundationDB</title>
<para>
- To enable FoundationDB, add the following to your
- <filename>configuration.nix</filename>:
-<programlisting>
-services.foundationdb.enable = true;
-services.foundationdb.package = pkgs.foundationdb52; # FoundationDB 5.2.x
-</programlisting>
+ <emphasis>Source:</emphasis>
+ <filename>modules/services/databases/foundationdb.nix</filename>
</para>
-
<para>
- The <option>services.foundationdb.package</option> option is required, and
- must always be specified. Due to the fact FoundationDB network protocols and
- on-disk storage formats may change between (major) versions, and upgrades
- must be explicitly handled by the user, you must always manually specify
- this yourself so that the NixOS module will use the proper version. Note
- that minor, bugfix releases are always compatible.
+ <emphasis>Upstream documentation:</emphasis>
+ <link xlink:href="https://apple.github.io/foundationdb/" role="uri">https://apple.github.io/foundationdb/</link>
</para>
-
<para>
- After running <command>nixos-rebuild</command>, you can verify whether
- FoundationDB is running by executing <command>fdbcli</command> (which is
- added to <option>environment.systemPackages</option>):
-<screen>
-<prompt>$ </prompt>sudo -u foundationdb fdbcli
+ <emphasis>Maintainer:</emphasis> Austin Seipp
+ </para>
+ <para>
+ <emphasis>Available version(s):</emphasis> 5.1.x, 5.2.x, 6.0.x
+ </para>
+ <para>
+ FoundationDB (or &quot;FDB&quot;) is an open source, distributed,
+ transactional key-value store.
+ </para>
+ <section xml:id="module-services-foundationdb-configuring">
+ <title>Configuring and basic setup</title>
+ <para>
+ To enable FoundationDB, add the following to your
+ <filename>configuration.nix</filename>:
+ </para>
+ <programlisting>
+services.foundationdb.enable = true;
+services.foundationdb.package = pkgs.foundationdb52; # FoundationDB 5.2.x
+</programlisting>
+ <para>
+ The <option>services.foundationdb.package</option> option is
+ required, and must always be specified. Due to the fact
+ FoundationDB network protocols and on-disk storage formats may
+ change between (major) versions, and upgrades must be explicitly
+ handled by the user, you must always manually specify this
+ yourself so that the NixOS module will use the proper version.
+ Note that minor, bugfix releases are always compatible.
+ </para>
+ <para>
+ After running <command>nixos-rebuild</command>, you can verify
+ whether FoundationDB is running by executing
+ <command>fdbcli</command> (which is added to
+ <option>environment.systemPackages</option>):
+ </para>
+ <programlisting>
+$ sudo -u foundationdb fdbcli
Using cluster file `/etc/foundationdb/fdb.cluster'.
The database is available.
Welcome to the fdbcli. For help, type `help'.
-<prompt>fdb> </prompt>status
+fdb&gt; status
Using cluster file `/etc/foundationdb/fdb.cluster'.
@@ -72,18 +68,17 @@ Cluster:
...
-<prompt>fdb></prompt>
-</screen>
- </para>
-
- <para>
- You can also write programs using the available client libraries. For
- example, the following Python program can be run in order to grab the
- cluster status, as a quick example. (This example uses
- <command>nix-shell</command> shebang support to automatically supply the
- necessary Python modules).
-<screen>
-<prompt>a@link> </prompt>cat fdb-status.py
+fdb&gt;
+</programlisting>
+ <para>
+ You can also write programs using the available client libraries.
+ For example, the following Python program can be run in order to
+ grab the cluster status, as a quick example. (This example uses
+ <command>nix-shell</command> shebang support to automatically
+ supply the necessary Python modules).
+ </para>
+ <programlisting>
+a@link&gt; cat fdb-status.py
#! /usr/bin/env nix-shell
#! nix-shell -i python -p python pythonPackages.foundationdb52
@@ -101,343 +96,328 @@ def main():
obj = json.loads(get_status(db))
print('FoundationDB available: %s' % obj['client']['database_status']['available'])
-if __name__ == "__main__":
+if __name__ == &quot;__main__&quot;:
main()
-<prompt>a@link> </prompt>chmod +x fdb-status.py
-<prompt>a@link> </prompt>./fdb-status.py
+a@link&gt; chmod +x fdb-status.py
+a@link&gt; ./fdb-status.py
FoundationDB available: True
-<prompt>a@link></prompt>
-</screen>
- </para>
-
- <para>
- FoundationDB is run under the <command>foundationdb</command> user and group
- by default, but this may be changed in the NixOS configuration. The systemd
- unit <command>foundationdb.service</command> controls the
- <command>fdbmonitor</command> process.
- </para>
-
- <para>
- By default, the NixOS module for FoundationDB creates a single SSD-storage
- based database for development and basic usage. This storage engine is
- designed for SSDs and will perform poorly on HDDs; however it can handle far
- more data than the alternative "memory" engine and is a better default
- choice for most deployments. (Note that you can change the storage backend
- on-the-fly for a given FoundationDB cluster using
- <command>fdbcli</command>.)
- </para>
-
- <para>
- Furthermore, only 1 server process and 1 backup agent are started in the
- default configuration. See below for more on scaling to increase this.
- </para>
-
- <para>
- FoundationDB stores all data for all server processes under
- <filename>/var/lib/foundationdb</filename>. You can override this using
- <option>services.foundationdb.dataDir</option>, e.g.
-<programlisting>
-services.foundationdb.dataDir = "/data/fdb";
+a@link&gt;
</programlisting>
- </para>
-
- <para>
- Similarly, logs are stored under <filename>/var/log/foundationdb</filename>
- by default, and there is a corresponding
- <option>services.foundationdb.logDir</option> as well.
- </para>
- </section>
- <section xml:id="module-services-foundationdb-scaling">
- <title>Scaling processes and backup agents</title>
-
- <para>
- Scaling the number of server processes is quite easy; simply specify
- <option>services.foundationdb.serverProcesses</option> to be the number of
- FoundationDB worker processes that should be started on the machine.
- </para>
-
- <para>
- FoundationDB worker processes typically require 4GB of RAM per-process at
- minimum for good performance, so this option is set to 1 by default since
- the maximum amount of RAM is unknown. You're advised to abide by this
- restriction, so pick a number of processes so that each has 4GB or more.
- </para>
-
- <para>
- A similar option exists in order to scale backup agent processes,
- <option>services.foundationdb.backupProcesses</option>. Backup agents are
- not as performance/RAM sensitive, so feel free to experiment with the number
- of available backup processes.
- </para>
- </section>
- <section xml:id="module-services-foundationdb-clustering">
- <title>Clustering</title>
-
- <para>
- FoundationDB on NixOS works similarly to other Linux systems, so this
- section will be brief. Please refer to the full FoundationDB documentation
- for more on clustering.
- </para>
-
- <para>
- FoundationDB organizes clusters using a set of
- <emphasis>coordinators</emphasis>, which are just specially-designated
- worker processes. By default, every installation of FoundationDB on NixOS
- will start as its own individual cluster, with a single coordinator: the
- first worker process on <command>localhost</command>.
- </para>
-
- <para>
- Coordinators are specified globally using the
- <command>/etc/foundationdb/fdb.cluster</command> file, which all servers and
- client applications will use to find and join coordinators. Note that this
- file <emphasis>can not</emphasis> be managed by NixOS so easily:
- FoundationDB is designed so that it will rewrite the file at runtime for all
- clients and nodes when cluster coordinators change, with clients
- transparently handling this without intervention. It is fundamentally a
- mutable file, and you should not try to manage it in any way in NixOS.
- </para>
-
- <para>
- When dealing with a cluster, there are two main things you want to do:
- </para>
-
- <itemizedlist>
- <listitem>
<para>
- Add a node to the cluster for storage/compute.
+ FoundationDB is run under the <command>foundationdb</command> user
+ and group by default, but this may be changed in the NixOS
+ configuration. The systemd unit
+ <command>foundationdb.service</command> controls the
+ <command>fdbmonitor</command> process.
</para>
- </listitem>
- <listitem>
<para>
- Promote an ordinary worker to a coordinator.
+ By default, the NixOS module for FoundationDB creates a single
+ SSD-storage based database for development and basic usage. This
+ storage engine is designed for SSDs and will perform poorly on
+ HDDs; however it can handle far more data than the alternative
+ &quot;memory&quot; engine and is a better default choice for most
+ deployments. (Note that you can change the storage backend
+ on-the-fly for a given FoundationDB cluster using
+ <command>fdbcli</command>.)
</para>
- </listitem>
- </itemizedlist>
-
- <para>
- A node must already be a member of the cluster in order to properly be
- promoted to a coordinator, so you must always add it first if you wish to
- promote it.
- </para>
-
- <para>
- To add a machine to a FoundationDB cluster:
- </para>
-
- <itemizedlist>
- <listitem>
<para>
- Choose one of the servers to start as the initial coordinator.
+ Furthermore, only 1 server process and 1 backup agent are started
+ in the default configuration. See below for more on scaling to
+ increase this.
</para>
- </listitem>
- <listitem>
<para>
- Copy the <command>/etc/foundationdb/fdb.cluster</command> file from this
- server to all the other servers. Restart FoundationDB on all of these
- other servers, so they join the cluster.
+ FoundationDB stores all data for all server processes under
+ <filename>/var/lib/foundationdb</filename>. You can override this
+ using <option>services.foundationdb.dataDir</option>, e.g.
</para>
- </listitem>
- <listitem>
+ <programlisting>
+services.foundationdb.dataDir = &quot;/data/fdb&quot;;
+</programlisting>
<para>
- All of these servers are now connected and working together in the
- cluster, under the chosen coordinator.
+ Similarly, logs are stored under
+ <filename>/var/log/foundationdb</filename> by default, and there
+ is a corresponding <option>services.foundationdb.logDir</option>
+ as well.
</para>
- </listitem>
- </itemizedlist>
-
- <para>
- At this point, you can add as many nodes as you want by just repeating the
- above steps. By default there will still be a single coordinator: you can
- use <command>fdbcli</command> to change this and add new coordinators.
- </para>
-
- <para>
- As a convenience, FoundationDB can automatically assign coordinators based
- on the redundancy mode you wish to achieve for the cluster. Once all the
- nodes have been joined, simply set the replication policy, and then issue
- the <command>coordinators auto</command> command
- </para>
-
- <para>
- For example, assuming we have 3 nodes available, we can enable double
- redundancy mode, then auto-select coordinators. For double redundancy, 3
- coordinators is ideal: therefore FoundationDB will make
- <emphasis>every</emphasis> node a coordinator automatically:
- </para>
-
-<screen>
-<prompt>fdbcli> </prompt>configure double ssd
-<prompt>fdbcli> </prompt>coordinators auto
-</screen>
-
- <para>
- This will transparently update all the servers within seconds, and
- appropriately rewrite the <command>fdb.cluster</command> file, as well as
- informing all client processes to do the same.
- </para>
- </section>
- <section xml:id="module-services-foundationdb-connectivity">
- <title>Client connectivity</title>
-
- <para>
- By default, all clients must use the current <command>fdb.cluster</command>
- file to access a given FoundationDB cluster. This file is located by default
- in <command>/etc/foundationdb/fdb.cluster</command> on all machines with the
- FoundationDB service enabled, so you may copy the active one from your
- cluster to a new node in order to connect, if it is not part of the cluster.
- </para>
- </section>
- <section xml:id="module-services-foundationdb-authorization">
- <title>Client authorization and TLS</title>
-
- <para>
- By default, any user who can connect to a FoundationDB process with the
- correct cluster configuration can access anything. FoundationDB uses a
- pluggable design to transport security, and out of the box it supports a
- LibreSSL-based plugin for TLS support. This plugin not only does in-flight
- encryption, but also performs client authorization based on the given
- endpoint's certificate chain. For example, a FoundationDB server may be
- configured to only accept client connections over TLS, where the client TLS
- certificate is from organization <emphasis>Acme Co</emphasis> in the
- <emphasis>Research and Development</emphasis> unit.
- </para>
-
- <para>
- Configuring TLS with FoundationDB is done using the
- <option>services.foundationdb.tls</option> options in order to control the
- peer verification string, as well as the certificate and its private key.
- </para>
-
- <para>
- Note that the certificate and its private key must be accessible to the
- FoundationDB user account that the server runs under. These files are also
- NOT managed by NixOS, as putting them into the store may reveal private
- information.
- </para>
-
- <para>
- After you have a key and certificate file in place, it is not enough to
- simply set the NixOS module options -- you must also configure the
- <command>fdb.cluster</command> file to specify that a given set of
- coordinators use TLS. This is as simple as adding the suffix
- <command>:tls</command> to your cluster coordinator configuration, after the
- port number. For example, assuming you have a coordinator on localhost with
- the default configuration, simply specifying:
- </para>
-
-<programlisting>
+ </section>
+ <section xml:id="module-services-foundationdb-scaling">
+ <title>Scaling processes and backup agents</title>
+ <para>
+ Scaling the number of server processes is quite easy; simply
+ specify <option>services.foundationdb.serverProcesses</option> to
+ be the number of FoundationDB worker processes that should be
+ started on the machine.
+ </para>
+ <para>
+ FoundationDB worker processes typically require 4GB of RAM
+ per-process at minimum for good performance, so this option is set
+ to 1 by default since the maximum amount of RAM is unknown. You're
+ advised to abide by this restriction, so pick a number of
+ processes so that each has 4GB or more.
+ </para>
+ <para>
+ A similar option exists in order to scale backup agent processes,
+ <option>services.foundationdb.backupProcesses</option>. Backup
+ agents are not as performance/RAM sensitive, so feel free to
+ experiment with the number of available backup processes.
+ </para>
+ </section>
+ <section xml:id="module-services-foundationdb-clustering">
+ <title>Clustering</title>
+ <para>
+ FoundationDB on NixOS works similarly to other Linux systems, so
+ this section will be brief. Please refer to the full FoundationDB
+ documentation for more on clustering.
+ </para>
+ <para>
+ FoundationDB organizes clusters using a set of
+ <emphasis>coordinators</emphasis>, which are just
+ specially-designated worker processes. By default, every
+ installation of FoundationDB on NixOS will start as its own
+ individual cluster, with a single coordinator: the first worker
+ process on <command>localhost</command>.
+ </para>
+ <para>
+ Coordinators are specified globally using the
+ <command>/etc/foundationdb/fdb.cluster</command> file, which all
+ servers and client applications will use to find and join
+ coordinators. Note that this file <emphasis>can not</emphasis> be
+ managed by NixOS so easily: FoundationDB is designed so that it
+ will rewrite the file at runtime for all clients and nodes when
+ cluster coordinators change, with clients transparently handling
+ this without intervention. It is fundamentally a mutable file, and
+ you should not try to manage it in any way in NixOS.
+ </para>
+ <para>
+ When dealing with a cluster, there are two main things you want to
+ do:
+ </para>
+ <itemizedlist spacing="compact">
+ <listitem>
+ <para>
+ Add a node to the cluster for storage/compute.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ Promote an ordinary worker to a coordinator.
+ </para>
+ </listitem>
+ </itemizedlist>
+ <para>
+ A node must already be a member of the cluster in order to
+ properly be promoted to a coordinator, so you must always add it
+ first if you wish to promote it.
+ </para>
+ <para>
+ To add a machine to a FoundationDB cluster:
+ </para>
+ <itemizedlist spacing="compact">
+ <listitem>
+ <para>
+ Choose one of the servers to start as the initial coordinator.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ Copy the <command>/etc/foundationdb/fdb.cluster</command> file
+ from this server to all the other servers. Restart
+ FoundationDB on all of these other servers, so they join the
+ cluster.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ All of these servers are now connected and working together in
+ the cluster, under the chosen coordinator.
+ </para>
+ </listitem>
+ </itemizedlist>
+ <para>
+ At this point, you can add as many nodes as you want by just
+ repeating the above steps. By default there will still be a single
+ coordinator: you can use <command>fdbcli</command> to change this
+ and add new coordinators.
+ </para>
+ <para>
+ As a convenience, FoundationDB can automatically assign
+ coordinators based on the redundancy mode you wish to achieve for
+ the cluster. Once all the nodes have been joined, simply set the
+ replication policy, and then issue the
+ <command>coordinators auto</command> command</