Replies: 4 comments 2 replies
-
Usually it's caused by heavy write workloads. JuiceFS will retry the transaction as expected.
-
Could you please explain what you mean by a 'heavy' workload? Are you referring to high bandwidth usage or a high level of concurrency? In our case, we are writing about 260GB of data into JFS using a single thread. Would this be considered a heavy workload? Could this issue be related to S3 throttling?
-
The heavy workload here means concurrent access to SQLite. SQLite allows only a single write transaction at a time (the database is locked for other writers), so concurrent metadata transactions from JuiceFS will be retried as normal.
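A minimal sketch of that single-writer behavior and the retry pattern it forces. This is illustrative Go using the mattn/go-sqlite3 driver, not JuiceFS's actual sql.go code, and the counters table is a hypothetical example:

```go
// Illustrative only, not JuiceFS's actual code: a retry loop of the kind
// described above. SQLite allows a single write transaction at a time, so a
// concurrent writer can fail with "database is locked" and the caller
// retries with a short backoff.
package main

import (
	"database/sql"
	"log"
	"strings"
	"time"

	_ "github.com/mattn/go-sqlite3"
)

// withRetry runs fn inside a transaction, retrying when SQLite reports
// the single-writer lock ("database is locked").
func withRetry(db *sql.DB, fn func(*sql.Tx) error) error {
	var err error
	for attempt := 0; attempt < 50; attempt++ {
		var tx *sql.Tx
		if tx, err = db.Begin(); err == nil {
			if err = fn(tx); err == nil {
				if err = tx.Commit(); err == nil {
					return nil // transaction went through
				}
			} else {
				tx.Rollback()
			}
		}
		if !strings.Contains(err.Error(), "database is locked") {
			return err // some other failure: do not retry
		}
		time.Sleep(time.Duration(attempt+1) * 10 * time.Millisecond) // simple backoff
	}
	return err
}

func main() {
	db, err := sql.Open("sqlite3", "meta.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Hypothetical table, just to have something to update.
	if _, err := db.Exec("CREATE TABLE IF NOT EXISTS counters (name TEXT PRIMARY KEY, value INT)"); err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec("INSERT OR IGNORE INTO counters VALUES ('inode', 0)"); err != nil {
		log.Fatal(err)
	}

	err = withRetry(db, func(tx *sql.Tx) error {
		_, e := tx.Exec("UPDATE counters SET value = value + 1 WHERE name = ?", "inode")
		return e
	})
	if err != nil {
		log.Fatal(err)
	}
}
```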
-
I see. Is there anything we can do about this? Are there any configurations or parameters that could help in this situation, or should we switch to another type of metadata server?
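One general SQLite-level mitigation, not a documented JuiceFS option and only an assumption to verify against the JuiceFS docs, is to raise SQLite's busy timeout and enable WAL journaling so that a blocked transaction waits for the lock instead of failing immediately with "database is locked". A minimal sketch of those pragmas in Go:

```go
// General SQLite tuning, not a JuiceFS-specific setting: whether and how
// these pragmas can be applied through a JuiceFS SQLite meta URL is an
// assumption to verify.
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "meta.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Wait up to 5 seconds for a competing write transaction to finish
	// instead of returning SQLITE_BUSY right away.
	if _, err := db.Exec("PRAGMA busy_timeout = 5000"); err != nil {
		log.Fatal(err)
	}
	// WAL mode lets readers proceed while a single writer holds the lock.
	if _, err := db.Exec("PRAGMA journal_mode = WAL"); err != nil {
		log.Fatal(err)
	}
}
```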
-
What happened:
We were writing some files to the mount point.
A line saying "database is locked" appeared in the log:
May 18 01:51:16 ip-172-31-4-1 juicefs[6074]: juicefs[6074] : Read transaction succeeded after 4 tries (15.108270368s), last error: database is locked [sql.go:768]
And access to the mount point became slow.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
It has only happened once so far; we don't know how to reproduce it.
Anything else we need to know?
Environment:
- JuiceFS version (juicefs --version) or Hadoop Java SDK version: juicefs version 1.1.0+2023-09-04.08c4ae6
- Cloud provider or hardware configuration running JuiceFS: aws
- OS (cat /etc/os-release):
- Kernel (uname -a):
- Object storage: s3
- Metadata engine info: a SQLite database located on an EBS data volume attached to the instance