MidPoint / MID-6819

Too many failed connections to target system can bring node down (OutOfMemory)


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Cannot Reproduce
    • Affects Version/s: 4.2
    • Fix Version/s: 4.3
    • Component/s: None
    • Labels:
    • Subscription:
      Active subscription
    • Backport Version:
      4.3

      Description

      Customer has a 4-node cluster.
      One of the target systems is an Oracle database connected using the DBTable connector.
      The connector account is locked because there is maintenance in progress on this system.
      For some time, midPoint was printing messages to the log about the account being locked.
      Eventually we managed to put the resource into maintenance state.

      But several hours later the node dropped out of the cluster and stopped because of an OutOfMemoryError.

      There are multiple livesync tasks and segmented reconciliations running across the whole cluster.

      Based on the logs, the problematic node was working normally after the problematic resource was put into maintenance state (around 17:05).

      Then at 20:23 there was the following message:

      WARN (org.quartz.impl.jdbcjobstore.JobStoreTX): This scheduler instance (XXXX) is still active but was recovered by another instance in the cluster. This may cause inconsistent behaviour
      

      The same message was repeated several times over the following minutes.

      Then the node finally failed with:

      [REPOSITORY] [...] ERROR (com.evolveum.midpoint.repo.sql.helpers.ObjectRetriever): Couldn't parse object ShadowType (oid here): java.lang.OutOfMemoryError: GC overhead limit exceeded
      

      This was followed by further occurrences of "This scheduler instance ... is still active but was recovered by another instance in the cluster. This may cause inconsistent behaviour".

      I think we must have a memory leak related to the (unsuccessful) connections from midPoint to the connector/target system.
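      The suspected failure mode can be illustrated with a minimal, self-contained Java sketch. This is not midPoint or connector code; the `ConnectionAttempt` class and the retry loop are hypothetical stand-ins showing how per-attempt state that is retained on the failure path accumulates on the heap until the JVM dies with "GC overhead limit exceeded":

      ```java
      import java.util.ArrayList;
      import java.util.List;

      // Illustrative sketch (hypothetical, not midPoint code): a retry loop
      // that keeps a reference to every failed connection attempt. If the
      // failure path never releases these objects, each retry grows the heap.
      public class LeakySketch {
          // Hypothetical stand-in for per-attempt connector state.
          static class ConnectionAttempt {
              final byte[] buffer = new byte[1024]; // memory held per attempt
          }

          // The leak: a collection that only ever grows.
          static final List<ConnectionAttempt> retained = new ArrayList<>();

          static void tryConnect() {
              ConnectionAttempt attempt = new ConnectionAttempt();
              retained.add(attempt); // retained even though the attempt fails
              throw new IllegalStateException("account is locked");
          }

          public static void main(String[] args) {
              for (int i = 0; i < 1000; i++) {
                  try {
                      tryConnect();
                  } catch (IllegalStateException e) {
                      // The error is logged, but the attempt object stays
                      // reachable from the static list and is never freed.
                  }
              }
              System.out.println("retained attempts: " + retained.size());
          }
      }
      ```

      A heap dump taken before the node dies would show such retained objects dominating the heap, which is one way to confirm or rule out this hypothesis.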


              People

               Assignee:
               Ivan Noris (vix)
               Reporter:
               Ivan Noris (vix)
               Votes:
               0
               Watchers:
               2
