MID-4414: Possible data inconsistency in midPoint



    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.9
    • Component/s: None
    • Labels:


      The following story from Arnost Starosta describes a situation that happened in a production environment with midPoint 3.6.1 deployed on a PostgreSQL DB. TL;DR: When running e.g. a recompute task, midPoint loads the DB data only at the very start of the task; if another change is made to a recomputed focus in the meantime (e.g. from the GUI), the recompute task may overwrite that value in an undesired way.

      Possible solutions:

      • Ensure midPoint loads all data without DB cursors/iterators, so that every object being processed is fresh
      • Investigate and fix midPoint's relative model. (But this problem was mostly caused by single-value replace deltas.)


      Problem:
      Updates in the source system during a running recomputation task can get lost. In the default configuration, midPoint recomputes objects (e.g. UserType) by first retrieving ALL of them from the repository, then passing each object to a worker thread. If an object is updated in the meantime (e.g. live-synced or edited from the GUI) before a worker thread recomputes it, that update can be overwritten by the object version retrieved when the recompute task started.
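The lost-update race described above can be illustrated with a minimal, self-contained sketch. The repository map, object key, and attribute values here are hypothetical illustrations, not midPoint code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LostUpdateDemo {
    public static void main(String[] args) {
        // Stand-in for the repository (hypothetical, not midPoint's API).
        Map<String, String> repo = new ConcurrentHashMap<>();
        repo.put("user-1", "fullName=Alice");

        // 1. The recompute task reads ALL objects at its very start,
        //    so it holds a snapshot that can go stale.
        String staleSnapshot = repo.get("user-1");

        // 2. Meanwhile, a GUI edit (or live sync) updates the object.
        repo.put("user-1", "fullName=Alice Smith");

        // 3. A worker thread later writes back state derived from the
        //    stale snapshot, silently overwriting the GUI edit.
        repo.put("user-1", staleSnapshot);

        System.out.println(repo.get("user-1")); // the GUI edit is gone
    }
}
```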

      Is your deployment affected?
      Hard to say; I don't see any relevant log message to check. I had to verify it by debugging the running recompute task and checking that SqlRepositoryServiceImpl.searchObjectsIterative calls ObjectRetriever.searchObjectsIterativeByPaging (OK) and not ObjectRetriever.searchObjectsIterativeAttempt (can lose updates).

      Deployments with a MySQL or H2 backend should be OK with the default configuration (see SqlRepositoryConfiguration.computeDefaultIterativeSearchParameters in the sources).

      Configure iterativeSearchByPaging and iterativeSearchByPagingBatchSize in the midpoint/repository element of config.xml. I don't know whether all backends support this setting, but PostgreSQL (which I use) does.
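A minimal sketch of these settings inside the midpoint/repository element of config.xml; the batch size value is illustrative, assuming 4 worker threads:

```xml
<midpoint>
    <repository>
        <!-- other repository settings (database, driver, ...) omitted -->
        <iterativeSearchByPaging>true</iterativeSearchByPaging>
        <!-- e.g. (2 * 4 worker threads) + 1 -->
        <iterativeSearchByPagingBatchSize>9</iterativeSearchByPagingBatchSize>
    </repository>
</midpoint>
```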

      After setting these parameters, the objects to recompute are read in 'pages' and fed to worker threads until the request queue between the reader thread and the worker threads is full; then the reader blocks. The size of the queue is hardcoded as 2 * number-of-worker-threads.

      With iterativeSearchByPagingBatchSize set you can still lose updates, but the time window in which this can happen shrinks from number-of-objects to max(page size, 2 * number-of-worker-threads). Without much thought, I set the page size to (2 * number-of-worker-threads) + 1.
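The sizing arithmetic above can be checked with a small sketch; the worker-thread count is an assumed example value:

```java
public class StaleWindow {
    // Worst-case number of objects that may be held stale at once:
    // the current page plus whatever fits in the bounded request queue.
    static int staleWindow(int pageSize, int workerThreads) {
        return Math.max(pageSize, 2 * workerThreads);
    }

    public static void main(String[] args) {
        int workers = 4;                 // assumed thread count
        int pageSize = 2 * workers + 1;  // the sizing suggested above
        // Window shrinks from number-of-objects to max(9, 8) = 9.
        System.out.println(staleWindow(pageSize, workers));
    }
}
```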


              martin.lizner Martin Lizner