We understand the importance of timely and accurate balance information. This update provides details regarding a delay in the synchronization of beginning-of-day balances for some partner accounts that occurred on the morning of November 25, 2025.
Our beginning-of-day (BOD) job, a critical internal process responsible for calculating and updating client cash and position balances, experienced a series of failures starting early that morning.
The failure originated in our transaction processing system. The job, which handles a large volume of data, encountered database query timeouts while reconciling account balance data from the previous day. The high-load reconciliation query was performing inefficiently and exceeded the configured transaction timeout; the resulting connection closures caused the balance synchronization job to fail. Repeated manual reruns of the job were also unsuccessful for the same reason.
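As an illustration only, the failure mode reads roughly as follows. This is a minimal, hypothetical Python sketch; the function names, timeout, and runtime figures are ours, not the production code. A single heavy reconciliation query whose runtime exceeds the transaction timeout fails, and every rerun hits the same wall because nothing about the query or its configuration has changed.

```python
# Hypothetical sketch of the failure mode: a single heavy reconciliation
# step whose runtime exceeds the transaction timeout. All names and
# numbers are illustrative, not taken from the production system.

TRANSACTION_TIMEOUT_S = 30          # assumed original timeout
QUERY_RUNTIME_S = 120               # assumed runtime of the heavy query


class QueryTimeout(Exception):
    """Raised when a query exceeds the transaction timeout;
    the driver then closes the connection."""


def run_reconciliation_query(runtime_s: float, timeout_s: float) -> str:
    # In production this would execute SQL; here we only compare the
    # (simulated) runtime against the configured timeout.
    if runtime_s > timeout_s:
        raise QueryTimeout(f"query ran {runtime_s}s, timeout is {timeout_s}s")
    return "balances reconciled"


def run_bod_job(max_retries: int = 3) -> str:
    # Repeated manual reruns fail the same way: the query itself is too
    # slow, so retrying without changing the timeout or the batch size
    # cannot succeed.
    for attempt in range(1, max_retries + 1):
        try:
            return run_reconciliation_query(QUERY_RUNTIME_S,
                                            TRANSACTION_TIMEOUT_S)
        except QueryTimeout:
            print(f"attempt {attempt}: query timed out, connection closed")
    return "job failed"


print(run_bod_job())
```

The sketch also makes clear why manual reruns could not help: the retry loop re-executes an unchanged query against an unchanged timeout.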
The core impact was a delay in the timely synchronization of balances for a subset of our B2B partner accounts.
The issue is now fully resolved, and normal service has been restored.
Our engineering teams mitigated the problem by increasing the database transaction timeout and reducing the batch size for the balance processing job. These changes allowed the job to complete the heavy data reconciliation without timing out. Additionally, we performed targeted manual updates for a critical set of partner accounts to minimize disruption while the main job completed.
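The mitigation can be sketched in the same illustrative style, again with hypothetical names and numbers: raising the transaction timeout and splitting the reconciliation into smaller batches keeps each individual query well under the limit, so the job completes end to end.

```python
# Minimal sketch of the mitigation, with illustrative numbers only:
# raise the transaction timeout and split the reconciliation into
# smaller batches so that no single query approaches the limit.

RAISED_TIMEOUT_S = 300.0            # assumed increased timeout
SECONDS_PER_ACCOUNT = 0.01          # assumed per-account query cost


def reconcile_batch(accounts: list[str], timeout_s: float) -> int:
    # Simulated cost model: runtime scales with batch size, so a
    # smaller batch stays comfortably under the timeout.
    runtime_s = len(accounts) * SECONDS_PER_ACCOUNT
    if runtime_s > timeout_s:
        raise TimeoutError(f"batch of {len(accounts)} took {runtime_s:.0f}s")
    return len(accounts)


def run_bod_job_batched(accounts: list[str], batch_size: int,
                        timeout_s: float) -> int:
    # Process the full account set in fixed-size slices; each slice is
    # an independent query that must finish within the timeout.
    done = 0
    for start in range(0, len(accounts), batch_size):
        done += reconcile_batch(accounts[start:start + batch_size], timeout_s)
    return done


accounts = [f"acct-{i}" for i in range(100_000)]
# With these assumed numbers, each batch of 5,000 costs ~50s,
# well under the raised 300s timeout.
print(run_bod_job_batched(accounts, batch_size=5_000,
                          timeout_s=RAISED_TIMEOUT_S))
```

Under this cost model, the original configuration fails for the same account set: one undivided batch of 100,000 accounts would run far past a 30-second timeout, which matches the failure described above.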
The balance synchronization job completed successfully, and all affected accounts now reflect the correct BOD balances for November 25, 2025.
We are committed to enhancing the stability of our data processing pipeline to prevent recurrence. Our immediate follow-up actions are focused on three core themes:
* Permanent Database Query Refactoring: Implement the identified, optimized query logic to reduce the execution time of the critical balance calculation query from minutes to under 30 seconds. This is the long-term fix for the root cause.
* Persistent Configuration: Permanently adjust the transaction timeout and batch size configurations for the balance processing job to ensure reliable execution, regardless of data volume fluctuations.
* Load and Performance Profiling: Integrate advanced profiling tools into the balance job to identify and troubleshoot slow queries under real production load.
* Connection Management Review: Investigate and address an observed increase in idle database connections to safeguard resource availability for critical jobs.
* Streamlined Eventing: We are finalizing the design to publish more granular system events to partners upon the completion of key processes, such as the end-of-day (EOD) and beginning-of-day (BOD) balance updates, improving transparency and integration capabilities.
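To give a sense of what a granular completion event might look like, the sketch below builds a hypothetical partner-facing payload for a BOD completion. The event type, field names, and schema are purely illustrative; the actual design is still being finalized.

```python
# Hypothetical sketch of a granular completion event for the BOD/EOD
# balance updates; the field names and schema are illustrative, not a
# committed API.

import json
from datetime import datetime, timezone


def build_completion_event(process: str, business_date: str,
                           accounts_updated: int) -> str:
    # A partner-facing event carrying just enough detail to confirm
    # that balances for a given business date are ready to consume.
    event = {
        "event_type": f"{process}_BALANCES_COMPLETED",
        "business_date": business_date,
        "accounts_updated": accounts_updated,
        "emitted_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(event)


print(build_completion_event("BOD", "2025-11-25", 12_345))
```

An event of this shape would let partners trigger their own downstream processing on completion rather than polling for updated balances.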
We apologize for the inconvenience this delay caused and thank you for your patience and understanding. Maintaining the highest level of system reliability and transparency is our top priority.