I am getting this exception for a batch of 50000 inserts (single-node cluster, local laptop):
com.datastax.oss.driver.api.core.DriverTimeoutException: Query timed out after PT2S
at com.datastax.oss.driver.api.core.DriverTimeoutException.copy(DriverTimeoutException.java:34) ~[java-driver-core-4.6.0-yb-11.jar
The timeout appears to be 2 seconds: if I reduce my batch size to 40000, I no longer get the timeout, and the running time I measure for each such batch is always slightly under 2 seconds.
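(For reference, the `PT2S` in the message is just an ISO-8601 duration string, the format used by `java.time.Duration`; a quick sanity check confirms it means exactly 2 seconds:)

```java
import java.time.Duration;

public class TimeoutCheck {
    public static void main(String[] args) {
        // "PT2S" is ISO-8601 duration syntax, which the driver
        // uses when printing its timeout values.
        Duration timeout = Duration.parse("PT2S");
        System.out.println(timeout.toMillis() + " ms"); // prints "2000 ms"
    }
}
```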
I have not configured any timeouts myself, so I suppose there is some default value - how can I change it? 2 seconds seems awfully short for data ingestion.
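From what I could find so far (please correct me if this is wrong), the 4.x driver ships its defaults in a `reference.conf` inside the jar, and the relevant setting seems to be `basic.request.timeout`, which one should be able to override with an `application.conf` on the classpath, something like:

```
# application.conf - assumption on my part, based on the DataStax 4.x docs
datastax-java-driver {
  basic.request.timeout = 10 seconds
}
```

Is that the right knob, and does it also apply to the Yugabyte fork of the driver?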
The total running time for data ingestion (if I reduce my batch size enough to avoid the timeout) is pretty much on par with the other databases I tested. But there is one other thing I find weird: when I log in to the ycqlsh shell, I never see any data created while I am logged in - all I ever see is the data that was already present at login.
I saw this article, but thought it didn't apply to my case: I am seeing a timeout of 2 seconds, not the 10 or 60 mentioned there. Also, the article only explains how to set the timeout for cqlsh, whereas I need to set it for the Java driver. And in case it matters: the client (my application) doesn't read anything, it only INSERTs.
About the ycqlsh situation: I start ycqlsh and run `describe tables;`. Then, via my application, I create new tables. My expectation was that another `describe tables;` would show the new tables and that I could select from them. But in reality I have to restart ycqlsh to see them.
a) Which timeout-related configuration values are available? I believe your link answers that - many thanks!
b) How can I set these values in the Java driver for Yugabyte?
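To show what I mean by b): from the DataStax 4.x docs it looks like the same option can also be set programmatically when building the session, along these lines (an untested sketch on my side; the class names are from the DataStax 4.x API, and I am assuming the Yugabyte fork keeps them unchanged):

```java
import java.time.Duration;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;

public class SessionFactory {
    public static CqlSession build() {
        // Override the 2-second default request timeout for this session only.
        DriverConfigLoader loader = DriverConfigLoader.programmaticBuilder()
                .withDuration(DefaultDriverOption.REQUEST_TIMEOUT, Duration.ofSeconds(30))
                .build();
        return CqlSession.builder()
                .withConfigLoader(loader)
                .build();
    }
}
```

Is this the recommended way, or should I prefer the `application.conf` route?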