This repository has been archived by the owner on Feb 12, 2024. It is now read-only.

'.jsipfs datastore' breaks peers connections / .jsipfs Identity is not the one set by peerId in setup. #4212

Closed
on-meetsys opened this issue Sep 16, 2022 · 6 comments
Assignees
Labels
good first issue Good issue for new contributors kind/unknown-in-helia need/analysis Needs further analysis before proceeding need/maintainer-input Needs input from the current maintainer(s) status/ready Ready to be worked

Comments

@on-meetsys

  • Version:
    "ipfs": "^0.64.0"

  • Platform:
    All platforms: Mac/Linux

Severity:

High

Description:

  • A previous .jsipfs directory makes ipfs unstable.
  • The peerId identity in .jsipfs is not the one set in the libp2p options. Why?

I launch ipfs on several computers with the following setup:

const myPeerId = await createFromProtobuf(privKey);

// on main_node, behind a box with 4002+4003 ports forwarded
const bootstrap = [];

// on other nodes
// const bootstrap = [
//   '/ip4/main_node_IP/tcp/4002/p2p/main_node_PeerID',
//   '/ip4/main_node_IP/tcp/4002/p2p/main_node_PeerID',
//   '/ip4/main_node_IP/tcp/4003/ws/p2p/main_node_PeerID',
// ];

ipfs = await createIpfs({
    libp2p: {
      peerId: myPeerId,
      pubsub: new GossipSub({
        allowPublishToZeroPeers: true,
        fallbackToFloodsub: true,
        emitSelf: true,
        maxInboundStreams: 64,
        maxOutboundStreams: 128,
      }),
      connectionProtector: new PreSharedKeyConnectionProtector({
        psk: new Uint8Array(Buffer.from(swarmKey, 'base64')),
      }),
      nat: {
        enabled: false,
      },
    },
    config: {
      Bootstrap: bootstrap,
    },
  });

const multiAddrs = await ipfs.swarm.localAddrs();
const main_node_bootstrap = multiAddrs.map((m) => m.toString());

This works fine: all nodes can publish messages via pubsub, and can save and load ipfs or dag content from the others.

But it only works once. If I stop and relaunch the same project, I can see peers connect/disconnect/connect/...
Then pubsub messaging no longer works (because the subscription is not automatically re-established on reconnect, I imagine?), and no ipfs or dag content can be read by the nodes anymore.
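
If the missing re-subscription theory is right, one way to test it is to re-register the handler whenever a peer (re)connects. A minimal sketch, assuming the EventTarget-style connectionManager events of the libp2p shipped with ipfs 0.64, and a hypothetical topic name:

const topic = 'my-topic'; // hypothetical topic name
const onMessage = (msg) => console.log('pubsub message:', msg);

await ipfs.pubsub.subscribe(topic, onMessage);
ipfs.libp2p.connectionManager.addEventListener('peer:connect', async () => {
  // re-assert the subscription after every (re)connection
  await ipfs.pubsub.unsubscribe(topic, onMessage);
  await ipfs.pubsub.subscribe(topic, onMessage);
});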

To make it work again, I have to remove the entire ~/.jsipfs directory. Then it works again, until I stop and run it again.
So it looks like something in this saved directory goes wrong... I haven't found what.
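
For anyone scripting the reset step, wiping the repo can be done from Node itself; a minimal sketch, assuming the repo lives at the default ~/.jsipfs location:

import { rm } from 'node:fs/promises';
import { homedir } from 'node:os';
import { join } from 'node:path';

// remove the persisted repo so the next start begins from a clean slate
await rm(join(homedir(), '.jsipfs'), { recursive: true, force: true });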

I've noticed that the identity written in ~/.jsipfs/config is not the same as myPeerId. I have tried to force it (by editing PeerId and PrivKey) with no success.
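
One quick way to surface the mismatch is to compare the identity the running node reports against the one that was passed in; a sketch using the standard ipfs.id() API:

const { id } = await ipfs.id(); // the identity the node actually uses
console.log('expected:', myPeerId.toString());
console.log('actual:  ', id.toString());
// if these differ, the identity persisted in ~/.jsipfs/config took precedence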

If it helps, I sometimes get the following messages:

[87587:0916/155216.592298:ERROR:webrtc_sdp.cc(414)] Failed to parse: "". Reason: Expect line: candidate:<candidate-str>
[87586:0916/155219.604026:ERROR:socket_manager.cc(127)] Failed to resolve address for d5b76e90-06cb-41ac-8eeb-654da146bd28.local., errorcode: -105

or (same code in an Electron project)


(node:87927) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
(Use `Electron --trace-warnings ...` to show where the warning was created)

or


[78526:0916/142850.492330:ERROR:dcsctp_transport.cc(447)] DcSctpTransport4->OnAborted(error=PEER_REPORTED, message=User-Initiated Abort, reason=Close called).
[78526:0916/142850.492563:ERROR:rtc_data_channel.cc(632)] DataChannel error: "User-Initiated Abort, reason=Close called", code: 12
[78526:0916/142850.529794:ERROR:sdp_offer_answer.cc(758)] Failed to set remote answer sdp: Called in wrong state: stable
[78526:0916/142850.532033:ERROR:dcsctp_transport.cc(439)] DcSctpTransport6->OnError(error=WRONG_SEQUENCE, message=Can't reset streams as the socket is not connected).

Steps to reproduce the error:

This can be tested with the following repo: https://github.com/on-meetsys/ipfs-node
Launch it on one computer, then run it on others with the bootstrap (index.js line 21) set to the multiaddress of the first computer, each one with its own peerId (line 28).
Everything works: pubsub messaging sends messages to fetch the ipfs/dag content stored by each node (see logs).
Stop and run again: it works for a few seconds, then peers disconnect, and there is no more messaging or content sharing.
Remove .jsipfs and run again: everything works.

The log when it works:

peer :  12D3KooWGBKCk69gHfTgbEGbzouodcX1AR3yh7vYcS7uN9Fsyo5h
peer :  12D3KooWD2ubH9hQUPwdqFhFEopmw3xDCz3bKaBs5fYECsShnrdB
peer :  12D3KooWHFFAHzhX6bGLce7YBmhRYueXUE2cjJmczWfMenRQ6NLV
save ipfs :  QmRg6ZRHkDbCMpcB8EYzb3fw6wfDoKGWsKzLjcisJHnew5
save dag :  bafyreidfvyv7qr3t5nzfpdladhxqyaw2qogovlifzdrw3obfp7vmgxsayu
got dag message : '12D3KooWD2ubH9hQUPwdqFhFEopmw3xDCz3bKaBs5fYECsShnrdB' bafyreibw7axzpviwju7e5lclpiae272hgw5u7s7m7jscgmedwvh546rfjy
got dag message : '12D3KooWD2ubH9hQUPwdqFhFEopmw3xDCz3bKaBs5fYECsShnrdB' bafyreibw7axzpviwju7e5lclpiae272hgw5u7s7m7jscgmedwvh546rfjy
got file message :  12D3KooWD2ubH9hQUPwdqFhFEopmw3xDCz3bKaBs5fYECsShnrdB QmfNJn3aZBMtpbM5LfLjcUFGTSfL7T4RfjEZqTr9n3nVFd
got file message :  12D3KooWD2ubH9hQUPwdqFhFEopmw3xDCz3bKaBs5fYECsShnrdB QmfNJn3aZBMtpbM5LfLjcUFGTSfL7T4RfjEZqTr9n3nVFd
got file message :  12D3KooWGBKCk69gHfTgbEGbzouodcX1AR3yh7vYcS7uN9Fsyo5h QmTLd7i7pzoWTMocv4iZifhMp4xDN2Xnm23ZLnj1rdfzVZ
got dag message : '12D3KooWGBKCk69gHfTgbEGbzouodcX1AR3yh7vYcS7uN9Fsyo5h' bafyreiea5d57tj5hh5thmsu7qb73wbp25utzqjjtof4ifqs6x5izlpneki
got dag :  {
  content: '12D3KooWGBKCk69gHfTgbEGbzouodcX1AR3yh7vYcS7uN9Fsyo5h ipfs dag #43'
}
got file :  12D3KooWGBKCk69gHfTgbEGbzouodcX1AR3yh7vYcS7uN9Fsyo5h ipfs file #43
got dag message : '12D3KooWHFFAHzhX6bGLce7YBmhRYueXUE2cjJmczWfMenRQ6NLV' bafyreica57i44h53t6zacyxirnkq3mhrv6tbqf7v7odhcynfrjlwzu5gmi
got dag :  {
  content: '12D3KooWHFFAHzhX6bGLce7YBmhRYueXUE2cjJmczWfMenRQ6NLV ipfs dag #3'
}

The log after a stop/run:

save ipfs :  QmPkW2TNd1r2Edb1nnKCzkRoj2RnidudWFqoWzFVGUYRri
save dag :  bafyreicfj4e3tyfrlqsn4d66m57uomdx5pzu4larr5tzxfzsczntg4pjbq
peer:connect 12D3KooWRz37wBV356TUh5M1GrNUjqkiCtRpVHej16q6LKz6wN92
peer:disconnect 12D3KooWLxhGcfMNxuq3ZH22d7QWpCjfrwVhAgtTkzWkGvaYHXqZ

save ipfs :  Qmd5PymmVsAdXWthMDbC9EMxTDoRRF9wLXvHgqEqsk5FMa
save dag :  bafyreidyhwf4gdsohqbq3aerdbpru263hhh37ayqztcz3fbaj5hokpcg3u
peer:connect 12D3KooWLxhGcfMNxuq3ZH22d7QWpCjfrwVhAgtTkzWkGvaYHXqZ
save ipfs :  QmcHXwELfopPBM8rBUFsEZVAWg6gzF6arbjZbU8u44XT7s
save dag :  bafyreigtyurewwt2fzmuffszvifwnnaa3fr34jxfnwtv2m4qwit55rvntm
peer:disconnect 12D3KooWRz37wBV356TUh5M1GrNUjqkiCtRpVHej16q6LKz6wN92
save ipfs :  Qmdf12U4iJT5QyBUgGbii6iyfnAY7toP9X9ZQbXj1hLVjJ
save dag :  bafyreifv7ltwvnrdyyfzl5jkozja7h7bmrzxdto74xzwqcevv6f24arci4
peer:connect 12D3KooWRz37wBV356TUh5M1GrNUjqkiCtRpVHej16q6LKz6wN92
peer:disconnect 12D3KooWLxhGcfMNxuq3ZH22d7QWpCjfrwVhAgtTkzWkGvaYHXqZ
@on-meetsys on-meetsys added the need/triage Needs initial labeling and prioritization label Sep 16, 2022
@welcome

welcome bot commented Sep 16, 2022

Thank you for submitting your first issue to this repository! A maintainer will be here shortly to triage and review.
In the meantime, please double-check that you have provided all the necessary information to make this process easy! Any information that can help save additional round trips is useful! We currently aim to give initial feedback within two business days. If this does not happen, feel free to leave a comment.
Please keep an eye on how this issue will be labeled, as labels give an overview of priorities, assignments and additional actions requested by the maintainers:

  • "Priority" labels will show how urgent this is for the team.
  • "Status" labels will show if this is ready to be worked on, blocked, or in progress.
  • "Need" labels will indicate if additional input or analysis is required.

Finally, remember to use https://discuss.ipfs.io if you just need general support.

@on-meetsys
Author

on-meetsys commented Sep 20, 2022

This problem seems to come from the .jsipfs/datastore. If we stop and remove the .jsipfs/datastore/00000X.log file, then it works perfectly afterwards.

I have modified the repo creation in https://github.com/on-meetsys/ipfs-node, forcing the datastore to be in memory, and everything works fine! One can stop and run it endlessly without problems. But this is only a workaround and it may have side effects:

  // Assumed imports, mirroring the backends js-ipfs itself wires up:
  import * as rawCodec from 'multiformats/codecs/raw';
  import { createRepo } from 'ipfs-repo';
  import { MemoryLock } from 'ipfs-repo/locks/memory';
  import { FsDatastore } from 'datastore-fs';
  import { LevelDatastore } from 'datastore-level';
  import { MemoryDatastore } from 'datastore-core/memory';
  import { ShardingDatastore } from 'datastore-core/sharding';
  import { NextToLast } from 'datastore-core/shard';
  import { BlockstoreDatastoreAdapter } from 'blockstore-datastore-adapter';

  const repo = createRepo(
    '',
    async () => rawCodec,
    {
      root: new FsDatastore(repoPath, {
        extension: ''
      }),
      blocks: new BlockstoreDatastoreAdapter(
        new ShardingDatastore(
          new FsDatastore(`${repoPath}/blocks`, {
            extension: '.data'
          }),
          new NextToLast(2)
        )
      ),
      datastore: new MemoryDatastore(), // MODIFIED: in-memory datastore -> no bug
      keys: new FsDatastore(`${repoPath}/keys`),
      pins: new LevelDatastore(`${repoPath}/pins`)
    },
    { autoMigrate: false, repoLock: MemoryLock, repoOwner: true }
  );

So it looks like something is not working in the default datastore reads... I will add comments here if I find out what...
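
For completeness, a repo built this way is handed to js-ipfs at creation time; a sketch reusing the createIpfs options from the original report:

ipfs = await createIpfs({
  repo, // the hand-built repo with the in-memory datastore
  libp2p: {
    peerId: myPeerId,
    // ...same pubsub/connectionProtector/nat options as above
  },
  config: { Bootstrap: bootstrap },
});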

@on-meetsys on-meetsys changed the title A previous .jsipfs breaks main ipfs features / .jsipfs Identity is not the one set by peerId in setup. '.jsipfs datastore' breaks peers connections / .jsipfs Identity is not the one set by peerId in setup. Sep 20, 2022
@on-meetsys
Author

It looks like the problem is in the LevelDatastore. If I set up the repo with:

datastore: new LevelDatastore(`${repoPath}/datastore`)

the bug is back: peers are disconnected/reconnected without messaging after the second run.

@achingbrain achingbrain added status/ready Ready to be worked good first issue Good issue for new contributors need/analysis Needs further analysis before proceeding and removed need/triage Needs initial labeling and prioritization labels Oct 14, 2022
@tinytb tinytb moved this to Good First Issue in IP JS (PL EngRes) v2 Nov 3, 2022
@tabcat
Contributor

tabcat commented Nov 15, 2022

I am able to see pubsub successfully re-peer after deleting the remote peerId from at least one of the PeerStores.

https://gist.github.com/tabcat/8fc26a03a58617ebea0e8ff4d261fcb3#file-test-ipfs-pubsub-js-L31-L32

await ipfs1.libp2p.peerStore.delete(id2.id)
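
In context, the restart workaround looks roughly like this; a sketch where id2 is the remote peer's identity (as in the gist) and remoteMultiaddr is a hypothetical multiaddr for it:

// drop the stale peer-store entry left over from the previous run, then redial
await ipfs1.libp2p.peerStore.delete(id2.id);
await ipfs1.swarm.connect(remoteMultiaddr);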

@marcus-pousette

marcus-pousette commented Nov 26, 2022

I posit this is an IPFS repo/mortice GC-lock leak. I wrote an issue here with patches you can apply in the meantime to fix the issue (hopefully). For me it at least works after the fix.

If you want the patch files I use (with the patch-package lib), see this repo's patches folder, specifically ipfs-repo+16.0.0.patch and mortice+3.0.1.patch.

(This assumes you are running the latest version of IPFS, 0.65.0.)
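
For reference, patch-package applies the .patch files in patches/ automatically once it is wired into a postinstall script; a minimal package.json excerpt (standard patch-package usage):

"scripts": {
  "postinstall": "patch-package"
}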

@SgtPooki SgtPooki added need/maintainer-input Needs input from the current maintainer(s) kind/unknown-in-helia labels May 16, 2023
@SgtPooki
Member

SgtPooki commented May 16, 2023

Hello all,

js-ipfs is being deprecated in favor of Helia. You can learn more about this deprecation and the corresponding migration guide here.

As a result, we are going to close this issue. If you think we have done this in error, please feel free to reopen with any comments in the next week, as we will circle back on reopened issues.

We hope you will consider Helia for your IPFS in JS needs. If you believe this particular request belongs in Helia, feel free to open a Helia issue. We look forward to engaging with you more there.

Thanks,
@ipfs/helia-dev

@SgtPooki SgtPooki closed this as not planned Won't fix, can't repro, duplicate, stale May 16, 2023
@github-project-automation github-project-automation bot moved this from Good First Issue to Done in IP JS (PL EngRes) v2 May 16, 2023
@SgtPooki SgtPooki self-assigned this May 17, 2023
@SgtPooki SgtPooki moved this to ✅ Done in js-ipfs deprecation May 17, 2023
@SgtPooki SgtPooki moved this from ✅ Done to 🏃‍♀️ In Progress in js-ipfs deprecation May 17, 2023
@SgtPooki SgtPooki moved this from 🏃‍♀️ In Progress to ✅ Done in js-ipfs deprecation May 17, 2023