need a safe way to truncate local datastore #245
Comments
just yesterday i was thinking we needed an …
@jbenet what problems exactly would just deleting the datastore directory cause?
Depends on whether we store keys in it that we expect for normal operation.
this would still be nice to have:

this could essentially just be:

Revision:
Hi, I'm trying to pick this up for Hacktoberfest, but I cannot for the life of me figure out how to get the confirmation prompt to fire on the client side. Anyone have any pointers?
@rascalking It should be possible to do that in a …

However, I'd like to get some input from @magik6k and @schomatis as we'll need to think about the design a bit. The "just delete every block" method works fine for the …

Thoughts? Am I being paranoid here?
Iterating doesn't seem that expensive in Badger since we're basically doing a serial read of big files (the most expensive part would be decoding the headers of each value block), but don't quote me on this; I should run a few tests to be sure.

Out of ignorance: we can't just delete the Badger DB because it's storing more than just the files the user added, right? Some metadata?
Yes. We'd have to either just delete the blocks, or migrate everything but the blocks to a new datastore and then delete the old one.

IIRC, badger behaves really poorly when deleting a ton of data. @magik6k?
Actually, you're right: deleting itself is not the most expensive part (although you'll be creating a new delete record for every value stored), but purging the DB after deleting everything (which implies a search for every deleted key and many compaction operations) is very CPU intensive.
rm -rf $IPFS_DIR/datastore
will break things