//cloudappliance:solrreplication — revised 2020_03_06 21:20 (adding failover/failback info, eric) and 2023_04_18 14:31 (small cleanup, eric)//
===== Solr Replication for Highly Available EFF Content Search =====
== last updated 2023_04_18 ==
==== Disclaimer ====
The information in this document is provided on an as-is basis. You use it at your own risk. We accept no responsibility for errors or omissions, nor do we have any obligation to provide support for implementing or maintaining the configuration described here.
SME designs, implements and supports HA File Fabric solutions for customers on a paid professional services basis. For more information please contact sales@storagemadeeasy.com
<WRAP center round important 100%>
The Enterprise File Fabric uses Solr to index and search file content.
This guide will step through the setup of a Leader-Follower Solr database pair, which allows for automatic failover without any loss of data.
==== Part 1 ====
=== Configuring Solr ===
You must perform these steps to create a specialized Solr configuration that supports replication.
=== Install Solr Replica Containers ===
The standard Solr container does not support replication, so we will install the replica-enabled container package:
<code>
yum install sme-containers-solr-replicas
</code>
We can then stop the existing Solr container:
<code>
cd /
</code>
After finishing the configuration below, we will start up the new replicas version.
=== Solr configuration for HA ===
== Solr Database Configuration ==
=== Configure Database Replication ===
== Update Configuration to enable ReplicationHandler ==
We will edit the Solr configuration file under /var/solr/data/ in order to turn on the ReplicationHandler.
Add this after the existing "<requestHandler>" definitions:
<code>
<!-- A Request Handler is dispatched based on the path specified in the request.
     If a Request Handler is declared with startup="lazy", then it will
     not be initialized until the first request that uses it. -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="leader">
    <str name="enable">${enable.leader:false}</str>
    <!-- Replicate on 'startup' and 'commit'. 'optimize' is also a valid value for replicateAfter. -->
    <str name="replicateAfter">commit</str>
    <str name="replicateAfter">startup</str>
  </lst>
  <lst name="follower">
    <str name="enable">${enable.follower:false}</str>
    <!-- Fully qualified url for the replication handler on the leader -->
    <str name="leaderUrl">http://smesearch:7070/solr/replication</str>
    <!-- Interval at which the follower should poll the leader. Format is HH:mm:ss.
         If this is absent the follower does not poll automatically.
         But a fetchindex can be triggered from the admin or the http API -->
    <str name="pollInterval">00:00:20</str>
    <!-- The following values are used when the follower connects to the leader to download the index files.
         Default values implicitly set as 5000ms and 10000ms respectively. The user DOES NOT need to specify
         these unless the bandwidth is extremely low or if there is an extremely high latency -->
    <str name="httpConnTimeout">5000</str>
    <str name="httpReadTimeout">10000</str>
    <!-- If HTTP Basic authentication is enabled on the leader, then the follower can be configured with the following -->
    <str name="httpBasicAuthUser">username</str>
    <str name="httpBasicAuthPassword">password</str>
  </lst>
</requestHandler>
</code>
Please note the use of the smesearch dns name for leaderUrl. If you have a different dns name please update the above configuration accordingly.
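Once the handler is in place you can confirm it responds on each host. A minimal sketch, assuming Solr listens on port 7070 and the replication handler is mounted at /solr/replication as configured above (adjust the path if your core layout differs):

```python
import json
from urllib.request import urlopen

def replication_url(host, port=7070, command="details"):
    """Build the ReplicationHandler URL for one host (path is an assumption)."""
    return f"http://{host}:{port}/solr/replication?command={command}&wt=json"

def fetch_details(host):
    """Query ?command=details on a host and return the parsed JSON response."""
    with urlopen(replication_url(host)) as resp:
        return json.load(resp)

# Usage (requires network access to the hosts from this guide):
#   fetch_details("smesql01")
#   fetch_details("smesql02")
```

The same URL builder can be reused with command=fetchindex to trigger a one-off replication pull from the admin side.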
== Define the Leader and Follower ==
Each Solr instance is configured to be able to act as either a leader or a follower; the active role is selected with the enable.leader and enable.follower properties.
On smesql01, to make it the leader, set:
<code>
enable.leader=true
enable.follower=false
</code>
On smesql02, to make it the follower, set:
<code>
enable.leader=false
enable.follower=true
</code>
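Failover recovery (covered later in this guide) amounts to swapping these two flags on each host. If you want to script that swap, here is a minimal sketch of the text transformation, assuming the flags live in a plain key=value properties file:

```python
def swap_roles(properties_text):
    """Flip enable.leader/enable.follower true<->false in a properties file body."""
    out = []
    for line in properties_text.splitlines():
        key, sep, value = line.partition("=")
        if sep and key.strip() in ("enable.leader", "enable.follower"):
            flipped = "false" if value.strip() == "true" else "true"
            out.append(f"{key.strip()}={flipped}")
        else:
            # Leave every other line of the properties file untouched.
            out.append(line)
    return "\n".join(out)

# Example: a leader config becomes a follower config.
# swap_roles("enable.leader=true\nenable.follower=false")
# -> "enable.leader=false\nenable.follower=true"
```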
== Allow replication whitelist ==
Next we will configure the whitelist to allow the Solr containers to replicate from each other, by adding the following to the Solr configuration:
<code>
<shardHandlerFactory name="shardHandlerFactory"
    class="HttpShardHandlerFactory">
  <int name="socketTimeout">${socketTimeout:600000}</int>
  <int name="connTimeout">${connTimeout:60000}</int>
  <str name="shardsWhitelist">smesql01:7070,smesql02:7070</str>
</shardHandlerFactory>
</code>
Replace smesql01/02 with their respective IP addresses.
== Start solr containers ==
Finally, we will start the Solr replica containers on both hosts in order to have those changes take effect:
<code>
cd /
</code>
== Restart Keepalived ==
We will now restart keepalived to apply the new configuration.
If this is a running production environment, be aware that restarting keepalived can briefly move the virtual IP, so plan this step accordingly.
<code>
systemctl restart keepalived
</code>
Settings > Search Integrations
Replace the Solr uri as follows:
<code>
http://smesearch:7070
</code>
=== Failover and Recovery ===
In the case of an outage of the Solr service, or of the smesql01 server, keepalived will fail over traffic to the Solr instance running on smesql02.
This process is fully automatic. All new Solr reads and writes will now occur on the smesql02 server without any intervention.
However, when the smesql01 server/Solr service is available again, we will not fail traffic back over in the other direction, as the smesql01 database will NOT contain any of the new indexes created during the outage. Solr replication is set up to run in only one direction at a time, so unlike the mysql setup, the smesql01 instance will not automatically catch up.
In order to get the leader and follower back in sync, we will reverse the replication direction so that smesql01 replicates from smesql02.
On smesql01 we will update the properties file under /var/solr/data/ as follows:
<code>
enable.leader=false
enable.follower=true
</code>
On smesql02 we will update the same properties file under /var/solr/data/ as follows:
<code>
enable.leader=true
enable.follower=false
</code>
Finally, we will restart the Solr containers on both hosts for the change to take effect:
<code>
cd /
</code>
This will switch the roles and start replicating data from smesql02 (the new leader) over to smesql01 (the new follower).

We can check replication status on the hosts via this webpage:
<code>
http://<host>:7070/solr/replication?command=details
</code>
Once both have the same Version/Gen number they are in sync.
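This comparison can be scripted against the JSON form of the details response. A minimal sketch; the indexVersion and generation field names are assumptions based on the ReplicationHandler's details output, so verify them against your Solr version:

```python
def index_state(details_json):
    """Extract (indexVersion, generation) from a parsed ?command=details response."""
    details = details_json["details"]
    return details.get("indexVersion"), details.get("generation")

def in_sync(leader_json, follower_json):
    """True when both hosts report the same index version and generation."""
    return index_state(leader_json) == index_state(follower_json)

# Usage (requires network access; host names assumed from this guide):
#   import json
#   from urllib.request import urlopen
#   url = "http://{}:7070/solr/replication?command=details&wt=json"
#   leader = json.load(urlopen(url.format("smesql02")))
#   follower = json.load(urlopen(url.format("smesql01")))
#   print(in_sync(leader, follower))
```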
From there you can then leave the hosts in their current roles, or fail back to the original configuration.
Do not fail back over the keepalived vip until replication is back in sync and you have made this update to make smesql01 the leader again.