 PostgreSQL
 ==========
 
-The Python Infrastructure offers PostgreSQL databases to services hosted in the
-Rackspace datacenter.
+The Python Infrastructure provides PostgreSQL databases to services hosted in
+the DigitalOcean datacenter.
 
+* Currently running hosted PostgreSQL 11, provided by DigitalOcean's managed
+  databases.
 
-* Currently running PostgreSQL 9.4
-
-* Operates a 2 node cluster with a primary node configured with streaming
-  replication to a replica node.
-
-  * Each node is running a 15 GB Rackspace Cloud Server.
-
-* Each app node has pgbouncer running on it pooling connections.
+* App nodes have pgbouncer running on them to pool connections.
 
 * The actual database user and password are known only to pgbouncer; each
   node gets a unique randomly generated password for the app to use when
   connecting to pgbouncer (see the connection sketch below).
 
-* The primary node also backs up to Rackspace CloudFiles in the ORD region
-  via WAL-E. A full backup is done once a week via a cronjob and WAL-E does
-  WAL pushes to fill in between the full backups.
 
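+The net effect is that an application only ever talks to its local pooler.
+Below is a minimal connection sketch using psycopg2; the database name and the
+environment variables holding the per-node credentials are assumptions for
+illustration, while 6432 is simply pgbouncer's usual listening port.
+
+.. code-block:: python
+
+    import os
+
+    import psycopg2
+
+    # pgbouncer runs on the app node itself, so connect over loopback
+    # with the node's randomly generated credentials.
+    conn = psycopg2.connect(
+        host="127.0.0.1",
+        port=6432,                          # pgbouncer's default port
+        dbname="exampleapp",                # hypothetical database name
+        user=os.environ["PGUSER"],          # assumed credential variables
+        password=os.environ["PGPASSWORD"],
+    )
+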
+Local Tooling
+-------------
+
+For roles which require PostgreSQL, the ``postgresql-primary`` Vagrant machine
+can be booted to provide infrastructure similar to the DigitalOcean-hosted
+Postgres.
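+
+Booting it is one command from the repository checkout, assuming the machine
+name matches the one defined in the Vagrantfile::
+
+    vagrant up postgresql-primary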
 
 
 Creating a New Database
@@ -80,43 +77,3 @@ Giving Applications Access
         },
     },
 }
-
-
-Application Integration
------------------------
-
-The PostgreSQL has been configured to allow an application to integrate with it
-to get some advanced features.
-
-
-(A)synchronous Commit
-~~~~~~~~~~~~~~~~~~~~~
-
-By default the PostgreSQL primary will ensure that each transaction is commited
-to persistent storage on the local disk before returning that a transaction
-has successfully been commited. However it will asynchronously replicate that
-transaction to the replicas. This means that if the primary server goes down
-in a way where the disk is not recoverable prior to replication occuring than
-that data will be lost.
-
-Applications may optionally, on a per transaction basis, request that the
-primary server has either given the data to a replica server or that a replica
-server has also written that data to persistent storage.
-
-This can be acchived by executing:
-
-.. code-block:: plpgsql
-
-    -- Set the transaction so that a replica will have received the data, but
-    -- not written the data out before the primary says the transaction is
-    -- complete.
-    SET LOCAL synchronous_commit TO remote_write;
-
-    -- Set the transaction so that a replica will have written the data to
-    -- persistent storage before the primary says the transaction is complete.
-    SET LOCAL synchronous_commit TO on;
-
-Obviously each of these options will mean the write will fail if the primary
-cannot reach the replica server. These options can be used when ensuring data
-is saved is more important than uptime with the minimal risk the primary goes
-completely unrecoverable.