This page shows you how to manually deploy an insecure multi-node CockroachDB cluster on Amazon's AWS EC2 platform, using AWS's managed load balancing service to distribute client traffic.
{{site.data.alerts.callout_danger}}If you plan to use CockroachDB in production, we strongly recommend using a secure cluster instead. Select Secure above for instructions.{{site.data.alerts.end}}
## Requirements
- You must have SSH access to each machine. This is necessary for distributing and starting CockroachDB binaries.
- Your network configuration must allow TCP communication on the following ports:
    - `26257` for intra-cluster and client-cluster communication
    - `8080` to expose your Admin UI
## Recommendations

- If you plan to use CockroachDB in production, carefully review the Production Checklist.
- Consider using a secure cluster instead. Using an insecure cluster comes with risks:
    - Your cluster is open to any client that can access any node's IP addresses.
    - Any user, even `root`, can log in without providing a password.
    - Any user, connecting as `root`, can read or write any data in your cluster.
    - There is no network encryption or authentication, and thus no confidentiality.
- Decide how you want to access your Admin UI:

    Access Level | Description
    ---|---
    Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`.
    Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`.
    Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the Admin UI (see the example after this list).

- All instances running CockroachDB should be members of the same Security Group.
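For example, with port 8080 completely closed, you can reach a node's Admin UI through an SSH tunnel from a machine that has SSH access to that node. This is a sketch only; the SSH user, key path, and node address are placeholders:

~~~ shell
# Forward local port 8080 to the Admin UI port on one node, then browse to http://localhost:8080.
$ ssh -i ~/.ssh/<your key>.pem -L 8080:localhost:8080 <ssh user>@<node external address>
~~~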
## Step 1. Configure your network
CockroachDB requires TCP communication on two ports:

- `26257` for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes
- `8080` for exposing your Admin UI

You can create these rules using Security Groups' Inbound Rules.
### Inter-node and load balancer-node communication
Field | Recommended Value |
---|---|
Type | Custom TCP Rule |
Protocol | TCP |
Port Range | 26257 |
Source | The name of your security group (e.g., sg-07ab277a) |
### Admin UI
Field | Recommended Value |
---|---|
Type | Custom TCP Rule |
Protocol | TCP |
Port Range | 8080 |
Source | Your network's IP ranges |
### Application data
Field | Recommended Value |
---|---|
Type | Custom TCP Rules |
Protocol | TCP |
Port Range | 26257 |
Source | Your application's IP ranges |
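If you prefer the AWS CLI to the console, the same inbound rules can be created with `aws ec2 authorize-security-group-ingress`. This is a sketch only; the security group ID and CIDR ranges below are placeholders to replace with your own values:

~~~ shell
# Inter-node and load balancer-node traffic: allow 26257 from the security group itself.
$ aws ec2 authorize-security-group-ingress --group-id sg-07ab277a \
    --protocol tcp --port 26257 --source-group sg-07ab277a

# Admin UI: allow 8080 from your network's IP range (placeholder CIDR).
$ aws ec2 authorize-security-group-ingress --group-id sg-07ab277a \
    --protocol tcp --port 8080 --cidr 203.0.113.0/24

# Application data: allow 26257 from your application's IP range (placeholder CIDR).
$ aws ec2 authorize-security-group-ingress --group-id sg-07ab277a \
    --protocol tcp --port 26257 --cidr 203.0.113.0/24
~~~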
## Step 2. Create instances
Create an instance for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload.
- Run at least 3 nodes to ensure survivability.
- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) instances, with SSD-backed EBS volumes or Instance Store volumes. For example, Cockroach Labs has used `m3.large` instances (2 vCPUs and 7.5 GiB of RAM per instance) for internal testing.
- Do not use "burstable" `t2` instances, which limit the load on a single core.
For more details, see Hardware Recommendations and Cluster Topology.
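For reference, instances along these lines could also be launched from the AWS CLI rather than the console. This is a sketch only, not a required step; the AMI, key pair, subnet, and volume settings are placeholders, and the instance type is just an example:

~~~ shell
# Launch 3 instances in the security group configured in Step 1 (all values are placeholders).
$ aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --count 3 \
    --instance-type m5.large \
    --key-name <your key pair> \
    --security-group-ids sg-07ab277a \
    --subnet-id subnet-xxxxxxxx \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":100,"VolumeType":"gp3"}}]'
~~~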
## Step 3. Synchronize clocks
CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), it spontaneously shuts down. This avoids the risk of consistency anomalies, but it's best to prevent clocks from drifting too far in the first place by running clock synchronization software on each node.
`ntpd` should keep offsets in the single-digit milliseconds, so that software is featured here, but other methods of clock synchronization are suitable as well.
1. SSH to the first machine.

2. Disable `timesyncd`, which tends to be active by default on some Linux distributions:

    ~~~ shell
    $ sudo timedatectl set-ntp no
    ~~~

    Verify that `timesyncd` is off:

    ~~~ shell
    $ timedatectl
    ~~~

    Look for `Network time on: no` or `NTP enabled: no` in the output.

3. Install the `ntp` package:

    ~~~ shell
    $ sudo apt-get install ntp
    ~~~

4. Stop the NTP daemon:

    ~~~ shell
    $ sudo service ntp stop
    ~~~

5. Sync the machine's clock with Google's NTP service:

    ~~~ shell
    $ sudo ntpd -b time.google.com
    ~~~

    To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines:

    ~~~
    server time1.google.com iburst
    server time2.google.com iburst
    server time3.google.com iburst
    server time4.google.com iburst
    ~~~

    Restart the NTP daemon:

    ~~~ shell
    $ sudo service ntp start
    ~~~

    {{site.data.alerts.callout_info}}We recommend Google's external NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.{{site.data.alerts.end}}

6. Verify that the machine is using a Google NTP server:

    ~~~ shell
    $ sudo ntpq -p
    ~~~

    The active NTP server will be marked with an asterisk.

7. Repeat these steps for each machine where a CockroachDB node will run.
Compute Engine instances are preconfigured to use NTP, which should keep offsets in the single-digit milliseconds. However, Google can't predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should:

- Configure each GCE instance to use Google's internal NTP service.
- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, configure the non-GCE machines to use Google's external NTP service.
Amazon provides the Amazon Time Sync Service, which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. The service also smears the leap second.
- If you plan to run your entire cluster on AWS, configure each AWS instance to use the internal Amazon Time Sync Service.
- However, if you plan to run a hybrid cluster across AWS and other cloud providers or environments, configure all machines to use Google's external NTP service, which is comparably accurate and also handles "smearing" the leap second.
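For reference, the Amazon Time Sync Service is reachable from EC2 instances at the link-local address `169.254.169.123`. Assuming the same `/etc/ntp.conf` approach shown above, pointing `ntpd` at it is a matter of replacing the `server` lines, for example:

~~~
# /etc/ntp.conf: use the Amazon Time Sync Service (link-local, available on EC2 instances).
server 169.254.169.123 prefer iburst
~~~

After editing the file, restart the NTP daemon as shown earlier.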
`ntpd` should keep offsets in the single-digit milliseconds, so that software is featured here. However, to run `ntpd` properly on Azure VMs, it's necessary to first unbind the Time Synchronization device used by the Hyper-V technology running Azure VMs; this device aims to synchronize time between the VM and its host operating system but has been known to cause problems.
1. SSH to the first machine.

2. Find the ID of the Hyper-V Time Synchronization device:

    ~~~ shell
    $ curl -O https://raw.githubusercontent.com/torvalds/linux/master/tools/hv/lsvmbus
    $ python lsvmbus -vv | grep -w "Time Synchronization" -A 3
    ~~~

    ~~~
    VMBUS ID 12: Class_ID = {9527e630-d0ae-497b-adce-e80ab0175caf} - [Time Synchronization]
            Device_ID = {2dd1ce17-079e-403c-b352-a1921ee207ee}
            Sysfs path: /sys/bus/vmbus/devices/2dd1ce17-079e-403c-b352-a1921ee207ee
            Rel_ID=12, target_cpu=0
    ~~~

3. Unbind the device, using the `Device_ID` from the previous command's output:

    ~~~ shell
    $ echo <DEVICE_ID> | sudo tee /sys/bus/vmbus/drivers/hv_util/unbind
    ~~~

4. Install the `ntp` package:

    ~~~ shell
    $ sudo apt-get install ntp
    ~~~

5. Stop the NTP daemon:

    ~~~ shell
    $ sudo service ntp stop
    ~~~

6. Sync the machine's clock with Google's NTP service:

    ~~~ shell
    $ sudo ntpd -b time.google.com
    ~~~

    To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines:

    ~~~
    server time1.google.com iburst
    server time2.google.com iburst
    server time3.google.com iburst
    server time4.google.com iburst
    ~~~

    Restart the NTP daemon:

    ~~~ shell
    $ sudo service ntp start
    ~~~

    {{site.data.alerts.callout_info}}We recommend Google's NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine.{{site.data.alerts.end}}

7. Verify that the machine is using a Google NTP server:

    ~~~ shell
    $ sudo ntpq -p
    ~~~

    The active NTP server will be marked with an asterisk.

8. Repeat these steps for each machine where a CockroachDB node will run.
## Step 4. Set up load balancing
Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing:
- Performance: Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second).
- Reliability: Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes.
AWS offers fully-managed load balancing to distribute traffic between instances.
1. Add AWS load balancing. Be sure to:
    - Set forwarding rules to route TCP traffic from the load balancer's port 26257 to port 26257 on the nodes.
    - Configure health checks to use HTTP port 8080 and path `/health?ready=1`. This health endpoint ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests (you can query it directly, as shown below).

2. Note the provisioned IP address for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster.
{{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of AWS's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}}
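To see what the health checks see, you can query the readiness endpoint on a node directly. This is only a spot check; the node address is a placeholder:

~~~ shell
# Returns HTTP 200 when the node is live and ready to accept SQL connections.
$ curl -i "http://<address of any node>:8080/health?ready=1"
~~~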
## Step 5. Start nodes
You can start the nodes manually or automate the process using systemd.
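If you start the nodes manually, each node is launched with `cockroach start`. Assuming the CockroachDB binary has already been copied to each instance, a minimal command might look like the following sketch; the addresses are placeholders, and the flags are explained in the table that follows:

~~~ shell
# Run on each instance; list 3-5 of the initial nodes in --join (placeholder addresses).
$ cockroach start \
    --insecure \
    --advertise-addr=<this node's internal address> \
    --join=<node1 address>,<node2 address>,<node3 address> \
    --cache=.25 \
    --max-sql-memory=.25 \
    --background
~~~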
Flag | Description
---|---
`--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking).
`--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
`--cache`<br>`--max-sql-memory` | Increases the node's cache and temporary SQL memory size to 25% of available system memory to improve read performance and increase capacity for in-memory SQL processing. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size).
`--background` | Starts the node in the background so you gain control of the terminal to issue more commands.

When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](start-a-node.html#locality).

For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds Admin UI HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](start-a-node.html).

Repeat these steps for each additional node that you want in your cluster.
## Step 6. Initialize the cluster
On your local machine, complete the node startup process and have them join together as a cluster:
1. Install CockroachDB on your local machine, if you haven't already.

2. Run the `cockroach init` command, with the `--host` flag set to the address of any node:

    ~~~ shell
    $ cockroach init --insecure --host=<address of any node>
    ~~~

    Each node then prints helpful details to the standard output, such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients. You can also confirm that all nodes have joined, as shown below.
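As an optional spot check that every node has joined the cluster, you can list node status from your local machine; the address is a placeholder:

~~~ shell
# Lists each node's ID, address, build version, and liveness.
$ cockroach node status --insecure --host=<address of any node>
~~~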
## Step 7. Test the cluster
CockroachDB replicates and distributes data for you behind the scenes and uses a gossip protocol to enable each node to locate data across the cluster.
To test this, use the built-in SQL client locally as follows:
1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of any node:

    ~~~ shell
    $ cockroach sql --insecure --host=<address of any node>
    ~~~

2. Create an `insecurenodetest` database:

    ~~~ sql
    > CREATE DATABASE insecurenodetest;
    ~~~

3. Use `\q` or `ctrl-d` to exit the SQL shell.

4. Launch the built-in SQL client, with the `--host` flag set to the address of a different node:

    ~~~ shell
    $ cockroach sql --insecure --host=<address of different node>
    ~~~

5. View the cluster's databases, which will include `insecurenodetest`:

    ~~~ sql
    > SHOW DATABASES;
    ~~~

    ~~~
    +--------------------+
    |      Database      |
    +--------------------+
    | crdb_internal      |
    | information_schema |
    | insecurenodetest   |
    | pg_catalog         |
    | system             |
    +--------------------+
    (5 rows)
    ~~~

6. Use `\q` to exit the SQL shell.
## Step 8. Run a sample workload
CockroachDB offers a pre-built `workload` binary for Linux that includes several load generators for simulating client traffic against your cluster. This step features CockroachDB's version of the TPC-C workload.
{{site.data.alerts.callout_success}}For comprehensive guidance on benchmarking CockroachDB with TPC-C, see our Performance Benchmarking white paper.{{site.data.alerts.end}}
1. SSH to the machine where you want to run the sample TPC-C workload.

    This should be a machine that is not running a CockroachDB node.

2. Download `workload` and make it executable:

    ~~~ shell
    $ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST ; chmod 755 workload.LATEST
    ~~~

3. Rename and copy `workload` into the `PATH`:

    ~~~ shell
    $ cp -i workload.LATEST /usr/local/bin/workload
    ~~~

4. Start the TPC-C workload, pointing it at the IP address of the load balancer:

    ~~~ shell
    $ workload run tpcc \
    --drop \
    --init \
    --duration=20m \
    --tolerate-errors \
    "postgresql://root@<IP ADDRESS OF LOAD BALANCER>:26257/tpcc?sslmode=disable"
    ~~~

    This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries.

    {{site.data.alerts.callout_success}}For more `tpcc` options, use `workload run tpcc --help`. For details about other load generators included in `workload`, use `workload run --help`.{{site.data.alerts.end}}

5. To monitor the load generator's progress, open the Admin UI by pointing a browser to the address in the `admin` field in the standard output of any node on startup.

    Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click Metrics on the left, select the SQL dashboard, and then check the SQL Connections graph. You can use the Graph menu to filter the graph for specific nodes.
## Step 9. Set up monitoring and alerting
Despite CockroachDB's various built-in safeguards against failure, it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.
For details about available monitoring options and the most important events and metrics to alert on, see Monitoring and Alerting.
## Step 10. Scale the cluster
You can start the nodes manually or automate the process using systemd.
## Step 11. Use the cluster
Now that your deployment is working, you can:
- Implement your data model.
- Create users and grant them privileges.
- Connect your application. Be sure to connect your application to the AWS load balancer, not to a CockroachDB node (see the example connection string after this list).
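For example, a driver or ORM would use a connection string along these lines; the load balancer address and database name are placeholders:

~~~
postgresql://root@<load balancer address>:26257/<your database>?sslmode=disable
~~~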