[midPoint] DOCKERIZED MIDPOINT (2 NODES) RUNNING ON AN ORACLE CLUSTER

Fabian Bosch fabian.bosch at daasi.de
Tue Apr 24 14:40:06 CEST 2018


Same here with MariaDB and midPoint v3.7.1.
Any solution to this?

regards

Fabian

-- midPoint 3.7.1 --

[embedded Tomcat opts (midpoint.sh)]

    JAVA_OPTS="$JAVA_OPTS
    -Xms2048M
    -Xmx2048M
    -Dpython.cachedir=$MIDPOINT_HOME/tmp
    -Djavax.net.ssl.trustStore=$MIDPOINT_HOME/keystore.jceks
    -Djavax.net.ssl.trustStoreType=jceks
    -Dmidpoint.home=$MIDPOINT_HOME
    -Dmidpoint.nodeId=NodeA
    -Dcom.sun.management.jmxremote.port=20001
    -Dcom.sun.management.jmxremote.rmi.port=20001
    -Dcom.sun.management.jmxremote.ssl=false
    -Dcom.sun.management.jmxremote.password.file=/opt/midpoint/midpoint-3.7-home/jmxremote.password
    -Dcom.sun.management.jmxremote.access.file=/opt/midpoint/midpoint-3.7-home/jmxremote.access"
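
For reference, the jmxremote.password and jmxremote.access files
referenced above have to contain the JMX account that the clustered
task manager uses (it must match the jmxUsername/jmxPassword in the
taskManager section below). A minimal sketch, assuming the credentials
from this config:

    # jmxremote.password (must be chmod 600 and owned by the user
    # running midPoint, otherwise the JVM refuses to start with
    # password authentication enabled)
    midpoint secret

    # jmxremote.access
    midpoint readwrite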

[midPoint config (config.xml)]

    <repository>
        <repositoryServiceFactoryClass>com.evolveum.midpoint.repo.sql.SqlRepositoryFactory</repositoryServiceFactoryClass>
        <baseDir>${midpoint.home}</baseDir>
        <embedded>false</embedded>
        <asServer>true</asServer>
        <driverClassName>org.mariadb.jdbc.Driver</driverClassName>
        <jdbcUsername>midpoint</jdbcUsername>
        <jdbcPassword>secret</jdbcPassword>
        <jdbcUrl>jdbc:mariadb://midpoint.remote.tld:3306/midpoint?characterEncoding=utf-8;LOCK_MODE=1;DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=10000</jdbcUrl>
        <hibernateDialect>com.evolveum.midpoint.repo.sql.util.MidPointMySQLDialect</hibernateDialect>
        <hibernateHbm2ddl>validate</hibernateHbm2ddl>
    </repository>
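
Side note: the LOCK_MODE, DB_CLOSE_ON_EXIT and LOCK_TIMEOUT parameters
in the jdbcUrl above are H2 connection options, apparently left over
from the embedded H2 default rather than anything MariaDB-specific; a
plain MariaDB URL would look like:

    jdbc:mariadb://midpoint.remote.tld:3306/midpoint?characterEncoding=utf-8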
    <taskManager>
        <clustered>true</clustered>
        <jmxUsername>midpoint</jmxUsername>
        <jmxPassword>secret</jmxPassword>
    </taskManager>
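
The second node is started the same way; only its node ID and JMX
ports differ, roughly:

    -Dmidpoint.nodeId=NodeB
    -Dcom.sun.management.jmxremote.port=20002
    -Dcom.sun.management.jmxremote.rmi.port=20002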


On 24.10.2017 at 21:51, Carlos Ferreira wrote:
> Hi,
>
> 1. I have downloaded the evolveum/midpoint image from Docker Hub;
>
> 2. I have created 2 containers, each one running midPoint, on 2
> separate servers;
>
> 3. I configured the config.xml file (on both nodes):
>
> *********************** /var/opt/midpoint/config.xml ***********************
>
> (...)
>
> <configuration>
>     <midpoint>
>         <webApplication>
>             <importFolder>${midpoint.home}/import</importFolder>
>         </webApplication>
>         <repository>
>             <repositoryServiceFactoryClass>com.evolveum.midpoint.repo.sql.SqlRepositoryFactory</repositoryServiceFactoryClass>
>             <baseDir>${midpoint.home}</baseDir>
>             <embedded>false</embedded>
>             <asServer>true</asServer>
>             <database>oracle</database>
>             <jdbcUsername>midpoint_wi</jdbcUsername>
>             <jdbcPassword>secret</jdbcPassword>
>             <jdbcUrl>jdbc:oracle:thin:@(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = orarac.trt)(PORT = 1521)) (LOAD_BALANCE = yes) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = prod.trt3.jus.br)))</jdbcUrl>
>         </repository>
>         <taskManager>
>             <clustered>true</clustered>
>             <jmxUsername>midpoint</jmxUsername>
>             <jmxPassword>secret</jmxPassword>
>         </taskManager>
>
> (...)
> *********************** /var/opt/midpoint/config.xml ***********************
>
>
> ----->>> midPoint was configured to access an Oracle database running
> in a cluster;
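>
> (For a single, non-clustered instance the equivalent thin URL would
> simply be jdbc:oracle:thin:@//dbhost:1521/SERVICE_NAME, with dbhost
> and SERVICE_NAME as placeholders; the long DESCRIPTION block is only
> needed for the load-balanced RAC setup.)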
>
>
> 4. The setenv.sh file (/usr/local/tomcat/bin/setenv.sh) has been
> configured as follows:
>
>   a) on node A
>
> CATALINA_OPTS="-Dmidpoint.nodeId=NodeA
> -Dmidpoint.home=/var/opt/midpoint/
> -Dcom.sun.management.jmxremote=true
> -Dmidpoint.jmxHostName=10.3.190.47
> -Dcom.sun.management.jmxremote.port=20001
> -Dcom.sun.management.jmxremote.ssl=false
> -Dcom.sun.management.jmxremote.password.file=/var/opt/midpoint/jmxremote.password
> -Dcom.sun.management.jmxremote.access.file=/var/opt/midpoint/jmxremote.access"
>
>   b) on node B
>
> CATALINA_OPTS="-Dmidpoint.nodeId=NodeB
> -Dmidpoint.home=/var/opt/midpoint/
> -Dcom.sun.management.jmxremote=true
> -Dmidpoint.jmxHostName=10.3.190.79
> -Dcom.sun.management.jmxremote.port=20002
> -Dcom.sun.management.jmxremote.ssl=false
> -Dcom.sun.management.jmxremote.password.file=/var/opt/midpoint/jmxremote.password
> -Dcom.sun.management.jmxremote.access.file=/var/opt/midpoint/jmxremote.access"
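>
> (Note that clustered midPoint nodes also contact each other over
> these JMX ports, so each container has to be able to reach the other
> node's jmxHostName and port. A quick reachability check from node A
> would be something like:
>
> nc -vz 10.3.190.79 20002
>
> and the reverse from node B against 10.3.190.47 port 20001.)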
>
>
> 5. I have run the script that creates the necessary objects (tables,
> etc.) in the Oracle database;
>
> 6. When I start the first node (A, for example), I am able to log in;
>
> 7. When I try to log in on the second node (B), I receive the
> following message:
>
> " Currently we are unable to process your request. Kindly try again 
> later."
>
> 8. In the idm.log file, I see the message:
>
> 2017-10-24 19:35:05,771 [] [QuartzScheduler_midPointScheduler-NodeB_ClusterManager]
> WARN (org.quartz.impl.jdbcjobstore.JobStoreTX): This scheduler
> instance (NodeB) is still active but was recovered by another instance
> in the cluster. This may cause inconsistent behavior.
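>
> (As far as I can tell, this Quartz warning means the two scheduler
> instances do not see a consistent cluster state, typically because
> the nodes' clocks are not NTP-synchronized (Quartz clustering assumes
> synchronized clocks) or because an instance re-registered under a
> conflicting ID. The registered instances and their last check-in
> times can be inspected directly in the repository; assuming the
> default QRTZ_ table prefix:
>
> SELECT INSTANCE_NAME, LAST_CHECKIN_TIME FROM QRTZ_SCHEDULER_STATE;
>
> Both nodes should appear there with recent, steadily advancing
> timestamps.)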
>
> 9. If I drop all the objects, re-execute the install script, and log
> in first from node B, I am successful. Nevertheless, I receive the
> same error message when trying to log in from node A.
>
>
> Did I miss anything?
>
>
> Thanks,
>
> Carlos A Ferreira
