The following steps describe how to deploy Drastic to a remote (or local) server.
[ Note: if you intend to run multiple servers, gather the details of the networks beforehand, as it is best to set up the system with the ‘real’ network rather than using localhost. ]
Ansible expects a user drastic to exist with sudo rights.
```
sudo adduser drastic
sudo usermod -G sudo,adm drastic
# If you want to propagate ssh certificates and simplify ssh access
sudo mkdir ~drastic/.ssh
cat ~/.ssh/authorized_keys | sudo tee -a ~drastic/.ssh/authorized_keys
```
Ansible needs Python to be installed on the target machine:

```
sudo apt-get install python
```
- Make sure you have access to the server via SSH, either directly or via a proxy.
- Install Ansible and Git on the host from which you are installing (not the target).
```
sudo apt-get install software-properties-common
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible git
```
- Fetch this package:
```
git clone <URL copied from above>
```
By default the user account on the server should be ‘drastic’, who should have sudo access. If you use a different account, change the user field in deploy-standalone.yml accordingly.
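As a sketch, the relevant field in deploy-standalone.yml typically looks something like this (the exact key names in the playbook may differ):

```yaml
# deploy-standalone.yml (excerpt) -- key names are illustrative
- hosts: drastic
  remote_user: drastic   # change if your server account is not 'drastic'
  become: yes
```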
Cassandra stores its data by default in /var/lib/cassandra. This should be redirected to an appropriate storage volume, either via a symbolic link or with something like:
```
mkdir /var/lib/cassandra
mount --bind <target> /var/lib/cassandra
mkdir /var/lib/cassandra/data
# Ensure that permissions and ownership are appropriately set
chown -R cassandra:cassandra /var/lib/cassandra
```
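Note that a bind mount set up this way does not survive a reboot. If you want it to persist, one option is an /etc/fstab entry along these lines (substitute your actual volume for &lt;target&gt;):

```
# /etc/fstab (excerpt)
<target>  /var/lib/cassandra  none  bind  0  0
```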
Precaution: if you are re-installing, it is probably worth doing a complete removal of Cassandra first, e.g.
```
sudo apt-get purge cassandra
```
- Create a hosts file containing the IP addresses of the servers. It should look like:
```
[drastic-databases]
node1

[drastic-webservers]
node1

[drastic:children]
drastic-databases
drastic-webservers
```
The host name of each machine should be accessible through ssh (configured in your ~/.ssh/config file).
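For example, a matching entry in ~/.ssh/config might look like this (host alias, address, and key file are illustrative):

```
# ~/.ssh/config
Host node1
    HostName 192.0.2.10
    User drastic
    IdentityFile ~/.ssh/id_rsa
```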
Some examples are present in the
- Create a host_vars file for each machine in the host_vars directory.
For each machine in [drastic-databases] you must provide the Ethernet interface on which Cassandra communicates. Usually this is eth0, but it may be different on more complex topologies (interface naming changed in Ubuntu > 15.10).
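A minimal host_vars file might then look like the following; the variable name cassandra_interface is an assumption, so check the playbook for the name it actually reads:

```yaml
# host_vars/node1 -- the variable name here is hypothetical
cassandra_interface: eth0
```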
- The default behavior is to use HTTP. If needed, this can be changed in the webservers.yml file with the https_mode variable. The nginx server uses the SSL certificate in /etc/nginx/ssl/nginx.crt; if the certificate and key are not present, a self-signed version is created during deployment.
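If you prefer to supply your own certificate rather than rely on the generated one, a self-signed pair can be created manually along these lines (the nginx.key path alongside nginx.crt is an assumption; check webservers.yml for the real paths):

```shell
# Create a self-signed certificate where the deployment expects it.
sudo mkdir -p /etc/nginx/ssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/nginx/ssl/nginx.key \
    -out /etc/nginx/ssl/nginx.crt \
    -subj "/CN=$(hostname -f)"
```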
Run the deployment with the following command:
```
ansible-playbook deploy_standalone.yml -i staging/hosts --ask-become-pass
```
Once started, the script will ask for some details: the sudo password for the user specified in deploy-standalone.yml, and your Bitbucket username and password. These are needed so that the script can retrieve the code from the private repositories.
Make a cup of tea.
Unfortunately, Cassandra can take a while to start: the process list will show the Java process running even though Cassandra is not yet available. To allow for this, the script pauses once the Cassandra installation is complete.
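If you want to check readiness yourself rather than rely on the pause, a small poll loop like the following works; it assumes bash and Cassandra's default CQL port of 9042:

```shell
#!/bin/bash
# Poll a TCP port until it accepts connections or a timeout (in seconds) expires.
wait_for_port() {
    local host=$1 port=$2 timeout=$3
    local elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        # bash's /dev/tcp pseudo-device attempts a real TCP connection
        if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
            return 0
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 1
}

# Example: wait up to two minutes for Cassandra's CQL port
# wait_for_port 127.0.0.1 9042 120 && echo "Cassandra is answering"
```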
On some occasions the drastic-node fails to start after installation. This is being investigated.
As a temporary workaround, execute

```
sudo service drastic-web start
```

on the target machine if the web service fails to work.
Post install tasks
See this LINK for full details, but the short version is:

```
ssh drastic@<target>
export DRASTIC_CONFIG=settings
source ~/web/bin/activate
drastic user-create
# and you probably want to create a group or two ....
# especially since you need to for ingesting
drastic group_create <group_name> <user_name_that_owns_group>
```
If you get an error on logging in, then on the target machine:

```
deactivate
. /usr/lib/drastic/web/bin/activate
cd /usr/lib/drastic/web/project
sudo ../bin/python manage.py syncdb
```

and just choose No if it asks any questions.
The file /etc/init/drastic-agent needs to have these lines (the last one should be there; the first two may not be):
```
env CQLENG_ALLOW_SCHEMA_MANAGEMENT=1
env AGENT_CONFIG=/usr/lib/drastic/agent/project/agent.config
exec /usr/lib/drastic/agent/bin/python /usr/lib/drastic/agent/project/wsgi....
```
Cluster set up.
[[[ **Note**: to deploy to multiple servers, or to add a server, the installation needs to know:
- the interface of the Cassandra network, i.e. the one that Cassandra uses to coordinate
- the address of at least one pre-existing node, since the cluster has to have something to join
- the name of the Cassandra cluster.
This should now be in the installation scripts, but the nuts and bolts are that in /etc/cassandra/cassandra.yaml:
```
auto_bootstrap: true
listen_interface: eth<n>
broadcast_address:
```
- in the seed_provider: stanza, add the addresses of known members
- ensure the cluster_name is the same on every node
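Put together, the relevant excerpt of /etc/cassandra/cassandra.yaml looks roughly like this (cluster name and seed addresses are illustrative):

```yaml
# /etc/cassandra/cassandra.yaml (excerpt)
cluster_name: 'DrasticCluster'        # must be identical on every node
auto_bootstrap: true
listen_interface: eth0                # the interface Cassandra coordinates on
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "192.0.2.10,192.0.2.11"   # addresses of existing cluster members
```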
- on the new node, stop Cassandra (service cassandra stop) and remove everything from the data directory (/var/lib/cassandra/data/)
- restart Cassandra and leave it to sort itself out, waiting for all the nodes to show up in the output of

```
nodetool status
```

and reach state UN; then run

```
nodetool cleanup
```

on all the nodes in the cluster (which may take some time, so it is perhaps best done in tmux). ]
There is a Vagrantfile in this repo that enables you to install Drastic to a virtual machine with minimum fuss. This is meant only for development and everything resides locally, so if you’re deploying on a production machine, use the usual Ansible instructions above.
First, get Vagrant:
```
sudo apt-get install vagrant
```
You will also need Ansible; get it as described above.
Then in the root of this repo, run

```
vagrant up
```

If this is the first time you’ve run this command, it will go and fetch a Vagrant “box”, which is just a VM base image. In this case it’s a bare-bones Ubuntu Trusty 64-bit image.
Vagrant will then go and run the Ansible script automatically, asking all the usual questions. If you change the Ansible provisioning scripts, just run

```
vagrant provision
```

to re-run Ansible.
To pause the VM, run `vagrant suspend`, and to resume it, `vagrant resume`. `vagrant halt` will shut the VM down and `vagrant up` will restart it. Finally, `vagrant destroy` will shut down and delete the VM.
There is a default user on the VM called `vagrant` with the password `vagrant`, so you can ssh into it as normal.