"Batch Size Threshold" is set at the table level. mirror_commit_after_max_operations and mirror_commit_after_max_seconds are per subscription, at the subscription level.

Another thing to keep in mind is that CDC will commit on transaction boundary by default. A commit will NOT be issued if the source side didn't send a commit to the target, even if the "Batch Size Threshold" has been exceeded. For example, if the source side does a batch insert of 1000 rows and the "Batch Size Threshold" on the target is set to 100, CDC for DS will NOT issue 10 commits (one commit for each 100 rows); only 1 commit will be issued, because the source side issued a single commit for the whole 1000 inserts. Likewise, if the source side has 100000 inserts with a commit every 25 inserts and "Batch Size Threshold" is set to 10, CDC for DS will send a COMMIT message to the Transaction Stage connector on the DataStage server for every 25 inserts.

If you want a smaller effective batch size, then either you must issue commits more frequently on the source side, or you can disable commitment control on the target (by setting the system parameter "mirror_commit_on_transaction_boundary" to false) and set the system parameter "mirror_commit_after_max_operations" to a value smaller than the "Batch Size Threshold"; CDC for DS will then send commits based on the "Batch Size Threshold" setting, assuming the DataStage job honours each commit it receives.
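To make the behaviour above concrete, here is a minimal Python sketch. It is not the real CDC engine, only the commit arithmetic described above; the function name and its arguments are hypothetical, and the last keyword argument merely stands in for the CDC system parameter of the same name.

# Minimal sketch (assumed model, not the actual CDC engine): reproduces the
# commit grouping described above so the examples can be checked.
def target_commit_points(total_rows,
                         source_commit_every,
                         batch_size_threshold,
                         mirror_commit_on_transaction_boundary=True):
    """Row numbers at which CDC for DS would send a COMMIT to the Transaction Stage."""
    commits = []
    pending = 0
    for row in range(1, total_rows + 1):
        pending += 1
        source_committed = (row % source_commit_every == 0) or (row == total_rows)
        if mirror_commit_on_transaction_boundary:
            # Default behaviour: only a source-side commit triggers a target
            # commit, even when the group has grown far past the Batch Size
            # Threshold.
            if source_committed:
                commits.append(row)
                pending = 0
        else:
            # Commitment control disabled on the target (with
            # mirror_commit_after_max_operations set below the threshold):
            # commits follow the Batch Size Threshold instead, assuming the
            # DataStage job honours each commit it receives.
            if pending >= batch_size_threshold or row == total_rows:
                commits.append(row)
                pending = 0
    return commits

# One source transaction of 1000 rows, threshold 100 -> a single commit.
print(len(target_commit_points(1000, 1000, 100)))       # 1

# 100000 inserts, source commits every 25 rows, threshold 10
# -> a COMMIT for every 25 inserts (4000 commits), not every 10.
print(len(target_commit_points(100000, 25, 10)))        # 4000

# Same workload with transaction-boundary commits disabled
# -> commits now track the threshold of 10.
print(len(target_commit_points(100000, 25, 10,
                               mirror_commit_on_transaction_boundary=False)))  # 10000

Running it reproduces the cases above: one commit for the 1000-row transaction, a commit per 25 inserts for the 100000-row workload, and threshold-driven commits once transaction-boundary commits are disabled.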