edb-deployer for normal people
Welcome
Some time ago I saw an interesting tool called edb-deployer. If you have no idea what EDB stands for, it's EnterpriseDB, the company behind the enterprise version of PostgreSQL. I'm not a salesperson, so for me it's just another company making money around OSS, and that's fine. However, I would like to tal… write about this small
CLI tool. It could be a very helpful addition to your cloud-migration
PoC toolbox. Sounds good? Then read on.
Introduction
edb-deployer is a tool written in Python that helps you build infrastructure and configure EDB's applications in a selected cloud environment. Well, not quite any cloud: there are three options right now: GCP, AWS, and Azure. In simple words, it's a wrapper around Terraform and Ansible.
Getting started
The configuration and installation process is simple and fast. The only thing we need is Python. All
the other tools, like gcloud-cli, aws-cli, ansible, and terraform, will be installed into a dedicated directory on the filesystem. Easy. If you're interested in the exact shell commands, the README is good enough.
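For reference, the setup looked roughly like this on my machine. This is a sketch from memory, not the authoritative procedure: the package name and the setup subcommand are my recollection of the README, so check the project's current README before running anything.

```shell
# Sketch of the installation, assuming the CLI is published on PyPI as
# "edb-deployment" and provides a per-cloud "setup" subcommand; verify
# both against the project's README before running.
python3 -m venv venv              # an isolated environment keeps things tidy
source venv/bin/activate
pip3 install edb-deployment       # install the CLI itself

# Let the tool download Terraform, Ansible, and the cloud CLIs into its
# own directory on the filesystem:
edb-deployment gcloud setup
```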
First run
OK, so we have a working tool. We can type edb-deployment gcloud configure -h
and we get a fantastic manual for working with GCP.
usage: edb-deployment gcloud configure [-h] [-a <ref-arch-code>] -u "<username>:<password>" [-o <operating-system>] [-t <postgres-engine-type>] [-v <postgres-version>] [-e <efm-version>] [-k <ssh-public-key-file>]
[-K <ssh-private-key-file>] [-r <cloud-region>] [-s <gcloud-spec-file>] [-c <gcloud-credentials-json-file>] -p <gcloud-project-id>
<project-name>
positional arguments:
<project-name> Terraform project name
optional arguments:
-h, --help show this help message and exit
-a <ref-arch-code>, --reference-architecture <ref-arch-code>
Reference architecture code name. Allowed values are: EDB-RA-1 for a single Postgres node deployment with one backup server and one PEM monitoring server, EDB-RA-2 for a 3 Postgres nodes deployment with quorum
base synchronous replication and automatic failover, one backup server and one PEM monitoring server, and EDB-RA-3 for extending EDB-RA-2 with 3 PgPoolII nodes. Default: EDB-RA-1
-u "<username>:<password>", --edb-credentials "<username>:<password>"
EDB Packages repository credentials.
-o <operating-system>, --os <operating-system>
Operating system. Allowed values are: CentOS7, CentOS8, RedHat7 and RedHat8. Default: CentOS8
-t <postgres-engine-type>, --pg-type <postgres-engine-type>
Postgres engine type. Allowed values are: PG for PostgreSQL, EPAS for EDB Postgres Advanced Server. Default: PG
-v <postgres-version>, --pg-version <postgres-version>
PostgreSQL or EPAS version. Allowed values are: 11, 12 and 13. Default: 13
-e <efm-version>, --efm-version <efm-version>
EDB Failover Manager version. Allowed values are: 3.10, 4.0 and 4.1. Default: 4.1
-k <ssh-public-key-file>, --ssh-pub-key <ssh-public-key-file>
SSH public key path to use. Default: /home/3sky/.ssh/id_rsa.pub
-K <ssh-private-key-file>, --ssh-private-key <ssh-private-key-file>
SSH private key path to use. Default: /home/3sky/.ssh/id_rsa
-r <cloud-region>, --gcloud-region <cloud-region>
GCloud region. Allowed values are us-central1, us-east1, us-east4, us-west1, us-west2, us-west3 and us-west4. Default: us-east1
-s <gcloud-spec-file>, --spec <gcloud-spec-file>
GCloud instances specification file, in JSON.
-c <gcloud-credentials-json-file>, --gcloud-credentials <gcloud-credentials-json-file>
GCloud credentials file (JSON) to use. Default: /home/3sky/accounts.json
-p <gcloud-project-id>, --gcloud-project-id <gcloud-project-id>
GCloud project ID
We can see all params with their default values or allowed values. That's super useful. At this point I decided to use the following parameters:
edb-deployment gcloud configure edbtest \
-a EDB-RA-1 \
-o CentOS8 \
-t PG \
-v 13 \
-u "admin:admin" \
-p rozpozanieedb \
-r us-east1 \
-c /home/3sky/accounts.json
An important thing about regions: I'm based in Europe, but I can't use any of GCP's EU regions. That would be sad if I wanted to use the tool for a client's infrastructure, you know, slower access to the databases, etc.
Another piece of information we need to provide is the EDB username and password. If we provide incorrect credentials, the Ansible playbook will fail, so be aware of that.
After this command, we'll get a nice file structure with generated vars, manifests, and a playbook.
├── ansible_vars.json
├── environments
│ ├── compute
│ │ ├── compute.tf
│ │ ├── outputs.tf
│ │ └── setup_volume.sh
│ ├── network
│ │ └── network.tf
│ └── security
│ └── firewall.tf
├── main.tf
├── playbook.yml
├── provider.tf
├── ssh_priv_key
├── ssh_pub_key
├── state.json
├── terraform.tfstate
├── terraform.tfstate.backup
├── terraform_vars.json
├── variables.tf
└── versions.tf
Terraform run
The only thing we need to do is run this command:
edb-deployment gcloud provision edbtest
We can also use the logs
command in a separate terminal window:
edb-deployment gcloud logs edbtest -t
In general, it's just the stdout of the running commands, but in my opinion, that's enough.
The whole step takes around 3 minutes in the case of the EDB-RA-1 reference architecture.
Terraform is well integrated here. It created 39 objects on GCP, with no errors and a clear flow.
Ansible run
Another one-liner:
edb-deployment gcloud deploy edbtest
This step takes a bit longer, around 14 minutes. The roles install a lot of components and configure three servers with different purposes. One problem is the small number of parameters: the underlying Ansible collection is quite flexible, but we can't use those features right now. In my opinion, this will improve. I opened one issue and the maintainers are kind, helpful, and responsive.
Final output
After these 3 commands and about 20 minutes, we get output similar to this:
PEM Server: https://34.73.50.111:8443/pem
PEM User: pemadmin
PEM Password: WKMYkIndCKUJthOtIOOs
Name Public IP SSH User Private IP
============================================================
barmanserver1 34.75.6.224 edbadm 10.142.0.4
pemserver1 34.73.50.111 edbadm 10.142.0.2
primary1 35.229.56.193 edbadm 10.142.0.3
We can check our PEM server with the above credentials, and we can log in to the VMs with:
ssh [email protected]
I have no idea about EDB products, but the environment looks like it's working.
Destroy the lab
Do not forget this step. When you finish testing, run
edb-deployment gcloud destroy edbtest
You can spend your money in a better way.
Summary
The whole process is really easy and fast. You don't even need to know how
to run Ansible or Terraform. However, the tool needs improvement. There are
a lot of small bugs that can be annoying. For example, if you make a typo
in the configure
command and Terraform fails, you can't just delete the project with the tool;
you need to delete the project directory yourself. Passwords in playbooks are stored in plain text.
Only a few GCP regions are available.
So is there any reason to use this tool even if we're not EDB users?
Yes, many:
- It's a nice use of Terraform and Ansible. We can learn how to integrate these fantastic tools and how to use them in general.
- We can take the generated files, modify them, and use them for our own infrastructure.
- The Ansible collection allows us to run playbooks without EDB dependencies. I hope this will be available from edb-deployer in the future.
- The tool can show the power of cloud computing in the context of real infrastructure, not a hello-world app.
- If we have an EDB account, it's a great way to learn their solutions.
- Also, it's fun to play with simple, well-designed tools.