We built a YugabyteDB cluster with 3 nodes, and we insert data rows using Apache Spark.
After about one million rows were inserted, the DB reported the following errors:
- org.postgresql.util.PSQLException: ERROR: Snapshot too old: Snapshot too old. Read point: { physical: 1573010951992140 }, earliest read time allowed: { physical: 1573010975727729 }, delta (usec): 23735589
- Read point: { physical: 1573010951992140 }, earliest read time allowed: { physical: 1573010975727729 }, delta (usec): 23735589 Call getNextException to see other errors in the batch.
Please help me to resolve it.
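For reference, the load goes through the PostgreSQL JDBC driver from Spark; a simplified sketch of the kind of write involved (the host, table, path, and credentials below are placeholders, not the real values):

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder.appName("yb-load").getOrCreate()
val df = spark.read.parquet("/data/rows.parquet") // placeholder source

// Write through the YSQL (PostgreSQL-compatible) endpoint, default port 5433.
df.write
  .format("jdbc")
  .option("url", "jdbc:postgresql://yb-tserver-1:5433/yugabyte")
  .option("dbtable", "test_rows")
  .option("user", "yugabyte")
  .option("password", "yugabyte")
  .option("driver", "org.postgresql.Driver")
  .mode(SaveMode.Append)
  .save()
```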
@harryboot Is it possible to get the .WARNING logs of the tserver/master?
@harryboot Thank you for reporting this issue. How are you inserting the data in the Spark program? Does every task insert multiple rows in a single long-running transaction? The "snapshot too old" error can appear for long-running transactions if a compaction happens in the middle: the transaction's read point falls behind the earliest history the server still retains (in your error, the delta of 23735589 usec means the read point was about 23.7 seconds too old). You could try increasing the timestamp_history_retention_interval_sec flag to a large value, e.g. 3600.
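If each task does write its whole partition in one long transaction, another option besides raising the flag is to keep the transactions short. Spark's stock JDBC writer commits each partition as a single transaction, so repartitioning into more, smaller partitions is one way to shorten them. A rough sketch, reusing the placeholder connection options from the sketch above:

```scala
// Sketch: more, smaller partitions => shorter per-partition transactions,
// since Spark's JDBC writer commits each partition as a single unit.
df.repartition(200) // illustrative count; tune to your data and cluster
  .write
  .format("jdbc")
  .option("url", "jdbc:postgresql://yb-tserver-1:5433/yugabyte")
  .option("dbtable", "test_rows")
  .option("user", "yugabyte")
  .option("password", "yugabyte")
  .option("driver", "org.postgresql.Driver")
  .option("batchsize", "1000") // rows per JDBC executeBatch() round-trip
  .mode("append")
  .save()
```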
Hi @harryboot, did you try the suggestions from mbautin?
Can you tell me how to set timestamp_history_retention_interval_sec? Where is it set?
Hi @harryboot, --timestamp_history_retention_interval_sec is a command-line flag for yb-master and yb-tserver. You probably want to set --timestamp_history_retention_interval_sec=120 or so.
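For example, it can be passed on the command line when starting each server (a sketch; the binary paths and the rest of the flags your deployment needs are elided here):

```sh
# Sketch: add the flag to each server's startup command; the other
# required flags (addresses, data dirs, ...) are omitted.
./bin/yb-master  --timestamp_history_retention_interval_sec=120 ...
./bin/yb-tserver --timestamp_history_retention_interval_sec=120 ...
```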
Thanks a lot, but I have lost my test environment for some reason.