Issue: Snapshot too old

We built a YugabyteDB cluster on 3 nodes and are inserting data rows with Apache Spark.
After about one million rows were inserted, the database reported:

  1. org.postgresql.util.PSQLException: ERROR: Snapshot too old: Snapshot too old. Read point: { physical: 1573010951992140 }, earliest read time allowed: { physical: 1573010975727729 }, delta (usec): 23735589
  2. Read point: { physical: 1573010951992140 }, earliest read time allowed: { physical: 1573010975727729 }, delta (usec): 23735589 Call getNextException to see other errors in the batch.

Please help us resolve this.

@harryboot
Is it possible to get the .WARNING logs of the tserver/master?

@harryboot Thank you for reporting this issue. How are you inserting the data in the Spark program? Does every task insert many rows in a single long-running transaction? The "snapshot too old" error can appear for long-running transactions if a compaction happens in the middle of them. You could try increasing the `timestamp_history_retention_interval_sec` flag to a larger value, e.g. 3600.
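To illustrate, here is a sketch of passing that flag when starting the yb-tserver processes. The flag name is from the suggestion above; the data directory, master addresses, and binary path are placeholders, not values from this thread:

```shell
# Increase how long old MVCC history is retained, so a long-running
# transaction's read snapshot survives compactions that happen mid-flight.
# 3600 seconds = 1 hour of history retention (tune to your workload).
./bin/yb-tserver \
  --fs_data_dirs=/mnt/data \
  --tserver_master_addrs=master1:7100,master2:7100,master3:7100 \
  --timestamp_history_retention_interval_sec=3600
```

Note that a larger retention interval means compactions keep old row versions around longer, so expect somewhat higher disk usage.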

Hi @harryboot,

Did you try the suggestions from @mbautin?