[midPoint] Clustered Configuration - Communication error

Matt Widhalm matthewwidhalm at weber.edu
Sat Feb 5 00:28:40 CET 2022


Hi Pavol,

That worked! All of my nodes are now showing as "running".

Thank you!

Matt Widhalm


On Tue, Feb 1, 2022 at 4:55 PM Pavol Mederly via midPoint <
midpoint at lists.evolveum.com> wrote:

> Matt,
>
> most probably your keystores (keystore.jceks) on individual nodes are not
> synchronized. They must be the same in order for the authentication - or
> any password-related operations - to work reliably.
>
> Take the keystore from the node where authentication works, and put it
> onto the other nodes.
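>
> A minimal sketch of one way to do that with Docker, assuming the official
> image layout (midpoint.home at /opt/midpoint/var) and containers named
> node_a and node_b; adjust the names and paths to your deployment:
>
>     # copy the known-good keystore out of the node where login works
>     docker cp node_a:/opt/midpoint/var/keystore.jceks ./keystore.jceks
>     # put it on the other node, then restart that node so it is re-read
>     docker cp ./keystore.jceks node_b:/opt/midpoint/var/keystore.jceks
>     docker restart node_b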
>
> Best regards,
>
> --
> Pavol Mederly
> Software developer
> evolveum.com
>
> On 02/02/2022 00:53, Matt Widhalm via midPoint wrote:
>
> I am working on this one again. While trying to log in to node_b I am
> seeing the following error: "Invalid username and/or password." This is
> with the administrator account. I have verified that the password is
> correct, as I am able to log in to node_a with the same credentials.
>
> I have also tried to log in using the emergency admin URL, and I am seeing
> this error: "Currently we are unable to process your request. Kindly try
> again later."
>
> Has anyone run into similar issues while trying to set up a
> clustered configuration?
>
> Any help would be greatly appreciated.
>
> Thank you,
> Matt Widhalm
>
>
> On Wed, Jan 12, 2022 at 11:47 AM Matt Widhalm <matthewwidhalm at weber.edu>
> wrote:
>
>> Good morning. I am attempting to set up a clustered environment using two
>> nodes. The issue I am running into is that while node_a is in the Running
>> state, node_b is showing a Communication error. I have verified that
>> communication between the two Docker containers is working (I can ping
>> node_b from node_a and vice versa). Below is the relevant section of my
>> config.xml for node_a; node_b is the same except that its nodeId reflects
>> that it is node_b.
>>
>>         <repository>
>>             <type>native</type>
>>             <jdbcUrl>jdbc:postgresql://<censored>:5432/midpoint_dev</jdbcUrl>
>>             <jdbcUsername><censored></jdbcUsername>
>>             <jdbcPassword><censored></jdbcPassword>
>>             <missingSchemaAction>create</missingSchemaAction>
>>             <baseDir>${midpoint.home}</baseDir>
>>             <asServer>true</asServer>
>>         </repository>
>>         <nodeId>node_a</nodeId>
>>         <taskManager>
>>             <clustered>true</clustered>
>>         </taskManager>
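>>
>> For completeness, the corresponding part of node_b's config.xml looks like
>> this (only the nodeId differs; the repository settings and censored values
>> are the same as above):
>>
>>         <nodeId>node_b</nodeId>
>>         <taskManager>
>>             <clustered>true</clustered>
>>         </taskManager>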
>>
>> On the All nodes page they show as:
>>
>> Name: node_a
>> Status: Running
>> Contact: http://<CONTAINER ID>:8080/midpoint
>> Clustered: checked
>> Status message:
>>
>> Name: node_b
>> Status: Communication error
>> Contact: http://<CONTAINER ID>:8080/midpoint
>> Clustered: checked
>> Status message: Node not known at this moment
>>
>> I have tried changing the httpPort and url in config.xml, with no change
>> to the Communication error. The containers are also on their own Docker
>> network.
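>>
>> For the record, the overrides I tried looked roughly like this (httpPort
>> and url are the elements I mentioned above; placing them next to nodeId is
>> my assumption, and the values are examples only):
>>
>>         <nodeId>node_a</nodeId>
>>         <url>http://node_a:8080/midpoint</url>
>>         <httpPort>8080</httpPort>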
>>
>> Any help would be appreciated.
>>
>> Thank you,
>>
>> Matt Widhalm
>>
>> System Engineer
>>
>> Weber State University
>>
>
> _______________________________________________
> midPoint mailing list
> midPoint at lists.evolveum.com
> https://lists.evolveum.com/mailman/listinfo/midpoint
>