Slony-I provides advanced database replication capabilities. The downside is that it is not simple to set up the first time; people who have already configured database replication will find it easier to succeed.
According to Slony's documentation, replicating across a WAN is not recommended: the slon daemon (the replication daemon) is fragile and can leave a database in a zombie state, refusing to die for up to 2 hours.
psql -d template1 -c "alter user $PGBENCHUSER with password 'password'"
psql -d template1 -c "alter user $PGBENCHUSER with password 'password'" -h $SLAVEHOST
psql -d template1 -c "alter user $REPLICATIONUSER with password 'password'"
psql -d template1 -c "alter user $REPLICATIONUSER with password 'password'" -h $SLAVEHOST
Creating the Databases
From now on, if you encounter password problems, add the password to your commands (see the note after the slon commands below). The following script initializes the replication cluster and defines the set of tables to replicate:
#!/bin/sh
slonik <<_EOF_
#--
# define the namespace the replication system uses in our example it is
# slony_example
#--
cluster name = $CLUSTERNAME;
#--
# admin conninfo's are used by slonik to connect to the nodes one for each
# node on each side of the cluster, the syntax is that of PQconnectdb in
# the C-API
#--
node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
#--
# init the first node. Its id MUST be 1. This creates the schema
# _$CLUSTERNAME containing all replication system specific database
# objects.
#--
init cluster ( id=1, comment = 'Master Node');
#--
# Because the history table does not have a primary key or other unique
# constraint that could be used to identify a row, we need to add one.
# The following command adds a bigint column named
# _Slony-I_$CLUSTERNAME_rowID to the table. It will have a default value
# of nextval('_$CLUSTERNAME.s1_rowid_seq'), and have UNIQUE and NOT NULL
# constraints applied. All existing rows will be initialized with a
# number
#--
table add key (node id = 1, fully qualified name = 'public.history');
#--
# Slony-I organizes tables into sets. The smallest unit a node can
# subscribe is a set. The following commands create one set containing
# all 4 pgbench tables. The master or origin of the set is node 1.
#--
create set (id=1, origin=1, comment='All pgbench tables');
set add table (set id=1, origin=1, id=1, fully qualified name = 'public.accounts', comment='accounts table');
set add table (set id=1, origin=1, id=2, fully qualified name = 'public.branches', comment='branches table');
set add table (set id=1, origin=1, id=3, fully qualified name = 'public.tellers', comment='tellers table');
set add table (set id=1, origin=1, id=4, fully qualified name = 'public.history', comment='history table', key = serial);
#--
# Create the second node (the slave) tell the 2 nodes how to connect to
# each other and how they should listen for events.
#--
store node (id=2, comment = 'Slave node');
store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
store path (server = 2, client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER');
store listen (origin=1, provider = 1, receiver = 2);
store listen (origin=2, provider = 2, receiver = 1);
_EOF_
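Save the script (for example as init_cluster.sh; the file name is arbitrary), make it executable, and run it:

chmod u+x init_cluster.sh
./init_cluster.sh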
slon $CLUSTERNAME "dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST" &    # Run on the master
slon $CLUSTERNAME "dbname=$SLAVEDBNAME user=$REPLICATIONUSER host=$SLAVEHOST" &      # Run on the slave
Add password=password inside the quotes if you get a password error (e.g., slon $CLUSTERNAME "dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST password=password").
You should now see many diagnostic messages showing the synchronization between the nodes. The next script subscribes the slave node to the replication set:
#!/bin/sh
slonik <<_EOF_
# ----
# This defines which namespace the replication system uses
# ----
cluster name = $CLUSTERNAME;
# ----
# Admin conninfo's are used by the slonik program to connect
# to the node databases. So these are the PQconnectdb arguments
# that connect from the administrators workstation (where
# slonik is executed).
# ----
node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
# ----
# Node 2 subscribes set 1
# ----
subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);
_EOF_
As with the script above, set the permissions and run it.
An initial replication occurs as soon as the script is launched. At the end of this first copy, the logs are analyzed to catch up on any changes made between the start and the end of the copy, and those changes are applied as well. After that, replication checks happen about once every 10 seconds (depending on the load of the machines).
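This roughly 10-second cadence is slon's SYNC check interval. On the Slony-I 1.x versions this article appears to target, it can be tuned with slon's -s option (in milliseconds); verify the option against your version's documentation:

# -s sets the SYNC check interval in milliseconds (10000 = the default, about 10 s)
slon -s 10000 $CLUSTERNAME "dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST" &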
Now we have our two identical databases!
Verification
We can verify using this script. It will dump both databases and compare them:
#!/bin/sh
echo -n "**** comparing sample1 ... "
psql -U $REPLICATIONUSER -h $MASTERHOST $MASTERDBNAME >dump.tmp.1.$$ <<_EOF_
select 'accounts:'::text, aid, bid, abalance, filler from accounts order by aid;
select 'branches:'::text, bid, bbalance, filler from branches order by bid;
select 'tellers:'::text, tid, bid, tbalance, filler from tellers order by tid;
select 'history:'::text, tid, bid, aid, delta, mtime, filler, "_Slony-I_${CLUSTERNAME}_rowID" from history order by "_Slony-I_${CLUSTERNAME}_rowID";
_EOF_
psql -U $REPLICATIONUSER -h $SLAVEHOST $SLAVEDBNAME >dump.tmp.2.$$ <<_EOF_
select 'accounts:'::text, aid, bid, abalance, filler from accounts order by aid;
select 'branches:'::text, bid, bbalance, filler from branches order by bid;
select 'tellers:'::text, tid, bid, tbalance, filler from tellers order by tid;
select 'history:'::text, tid, bid, aid, delta, mtime, filler, "_Slony-I_${CLUSTERNAME}_rowID" from history order by "_Slony-I_${CLUSTERNAME}_rowID";
_EOF_

if diff dump.tmp.1.$$ dump.tmp.2.$$ >$CLUSTERNAME.diff ; then
    echo "success - databases are equal."
    rm dump.tmp.?.$$
    rm $CLUSTERNAME.diff
else
    echo "FAILED - see $CLUSTERNAME.diff for database differences"
fi
Adding a New Node
Here are the commands to prepare the 3rd node (they assume your .profile now also defines $SLAVE2HOST):
createuser -A -D $PGBENCHUSER -h $SLAVE2HOST
createuser -s $REPLICATIONUSER -h $SLAVE2HOST
psql -d template1 -c "alter user $PGBENCHUSER with password 'password'" -h $SLAVE2HOST
psql -d template1 -c "alter user $REPLICATIONUSER with password 'password'" -h $SLAVE2HOST
createdb -O $PGBENCHUSER -h $SLAVE2HOST $SLAVEDBNAME
pg_dump -s -U $REPLICATIONUSER -h $MASTERHOST $MASTERDBNAME | psql -U $REPLICATIONUSER -h $SLAVE2HOST $SLAVEDBNAME
Then create and execute a script along the following lines, adapting it to your needs.
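This sketch is based on the standard Slony-I example; node id 3, its comment, and the reuse of $SLAVEDBNAME on $SLAVE2HOST are assumptions:

#!/bin/sh
slonik <<_EOF_
cluster name = $CLUSTERNAME;
node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
node 3 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVE2HOST user=$REPLICATIONUSER';
#--
# register the new node and tell the nodes how to reach each other
# (older Slony versions may also need matching store listen entries,
# as in the first script)
#--
store node (id=3, comment = 'Slave node 2');
store path (server = 1, client = 3, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
store path (server = 3, client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVE2HOST user=$REPLICATIONUSER');
store path (server = 2, client = 3, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER');
store path (server = 3, client = 2, conninfo='dbname=$SLAVEDBNAME host=$SLAVE2HOST user=$REPLICATIONUSER');
#--
# node 3 initially receives set 1 from node 1
#--
subscribe set ( id = 1, provider = 1, receiver = 3, forward = no);
_EOF_

Don't forget to start a slon daemon for the new node:

slon $CLUSTERNAME "dbname=$SLAVEDBNAME user=$REPLICATIONUSER host=$SLAVE2HOST" &    # Run on the new slave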
Promoting a replica is useful, for example, when you have multiple nodes and need to perform maintenance on the master.
In this example, node 1 is the origin of set 1 and sends set 1's data to node 2. When you add the 3rd node, you subscribe it to set 1, and the set's data is then replicated to both subscriber nodes.
Now the goal is to change the master, so that node 3 gets its data from node 2 rather than from node 1. Node 3 must, of course, have paths to both node 1 and node 2. Node 2 can then become the master, because it subscribed to set 1 and was also activated as a forwarder of set 1.
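Concretely, this means node 2's subscription has to be created with forwarding enabled. The subscription script earlier in this article used forward = no; for node 2 to later act as a provider and be promoted, it would be subscribed like this instead (same variables as before):

subscribe set ( id = 1, provider = 1, receiver = 2, forward = yes);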
Fortunately, there is a direct replication link between nodes 2 and 3. Once node 1 is ready for reintegration, you re-register it as a slave rather than as the master.
Here, we'll upgrade the PostgreSQL version of our master. We'll need to switch our master to another node using the "fast upgrade using master promotion" method.
Before deciding to switch the master, you must establish a plan for switching your applications. You'll likely need to change your PostgreSQL database address, unless you're using virtual IPs in a cluster environment.
Another precaution: work with test copies of your databases before moving to production. Remember that replicas are read-only! And one last thing: back up your databases before any major operation!
Everything that happens in a Slony cluster is propagated as events. The most important are SYNC events, which are synchronized to every registered node. Replication logs are also transmitted to nodes that accept forwarding, so that the old master can be resynchronized if needed.
To move a set to a new master, you must be sure the other nodes can replicate properly from it. Before switching, the set must be locked against all modifications (lock set); then the set can be moved (move set), which registers the new master as the set's origin.
WARNING: before running the script, make sure write permissions on the master database are disabled!
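A minimal sketch of such a promotion script, assuming the two-node layout from the first part of this article (the set and node ids are the ones used there):

#!/bin/sh
slonik <<_EOF_
cluster name = $CLUSTERNAME;
node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
#--
# forbid all writes to set 1 on the current origin
#--
lock set (id = 1, origin = 1);
#--
# node 2 becomes the new origin (master); node 1 automatically
# continues as an ordinary subscriber of the set
#--
move set (id = 1, old origin = 1, new origin = 2);
_EOF_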