Hi,
I am using yugabyte-1.3.0.0, and tried the command cqlsh -e "COPY test.mytable TO 'mytable.csv' WITH HEADER = true AND PAGESIZE = 10000 AND PAGETIMEOUT = 100;" to take a backup. It takes around 7 minutes for 2 million records, which is a very long time. Is there any other method that takes less time? I also think this command only backs up the Cassandra keyspace; how can I back up data inserted using the Redis and PostgreSQL APIs?
I am also looking for "INCREMENTAL BACKUP", but am unable to find any supporting docs.
Hi @Raghunath_Murmu,
You can try using cassandra-unloader for exporting larger tables: Bulk export YSQL | YugabyteDB Docs
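For example, an invocation could look something like this (the host, output file, and exact flag names are illustrative and follow the cassandra-loader project's conventions; check the tool's `--help` and the docs page above for the options your build supports):

```sh
# Illustrative cassandra-unloader run: export test.mytable to a local CSV.
# Host address and flag names are assumptions; verify with --help.
cassandra-unloader \
  -f mytable.csv \
  -host 127.0.0.1 \
  -schema "test.mytable"
```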
For SQL, you can use the COPY TO command for now. We are also working on enabling pg_dump for use with YugaByte and hope to have it ready in the next 2-3 weeks.
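As a rough sketch, YSQL follows standard PostgreSQL COPY syntax, so an export could look like this (the table name and file path are illustrative):

```sql
-- Server-side export: writes the CSV file on the node where the statement runs.
COPY test.mytable TO '/tmp/mytable.csv' WITH (FORMAT csv, HEADER);
```

From a psql-style client you can also use the client-side \copy variant to write the file on your local machine instead.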
Note that these are export-based backups (where the output is readable). For better performance, you can use distributed backups. @bogdan or @ramkumar should be able to provide more information on that.
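For reference, distributed backups are snapshot-based and driven through yb-admin; a hedged sketch (the master address and keyspace/table names are illustrative, and subcommand support may vary by version, so check yb-admin --help first):

```sh
# Take a distributed snapshot of test.mytable, then list snapshots to get its id.
yb-admin -master_addresses 127.0.0.1:7100 create_snapshot test mytable
yb-admin -master_addresses 127.0.0.1:7100 list_snapshots
```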
Regarding incremental backups, that’s on our roadmap but we don’t support it yet.