IPFS within a scalable blockchain architecture #8
Very nice monkeypants. We should talk more about your design, in particular the need for notaries to run their own service which does the aggregation. My design above builds that into the protocol layer: miners would aggregate data, so there is no need for separate service-provider servers that do the aggregation for you. This, coupled with a merge-mined altcoin with a bigger OP_RETURN data field (to accommodate larger hash functions for quantum resistance, as well as the IPFS URI for the full merkle-DAG for verification), would be an ideal solution I think, and one where we can potentially collaborate.
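For a rough sense of why a bigger data field matters, here is some back-of-envelope arithmetic (a sketch only; the digest and URI sizes are illustrative assumptions, not protocol facts) showing that a quantum-resistant digest plus an IPFS URI would overflow Bitcoin's current OP_RETURN:

```python
# Back-of-envelope payload arithmetic for the "bigger OP_RETURN" idea.
# All sizes below are illustrative assumptions, not protocol facts.
MULTIHASH_HEADER = 2            # varint hash-function code + digest length
SHA3_512_DIGEST = 64            # a plausible quantum-resistant digest size
IPFS_URI = len("/ipfs/") + 46   # "/ipfs/" prefix + base58 CIDv0 text

payload = MULTIHASH_HEADER + SHA3_512_DIGEST + IPFS_URI
print(payload)        # 118 bytes
print(payload <= 80)  # False: exceeds the ~80 bytes one OP_RETURN allows
```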
The current implementation has a single aggregator which depends on an RDBMS. If I continue to scale vertically, I think the practical limit on the number of nodes in a single peg/proof tree is maybe ~1 billion. However, that's just my current implementation. I don't think the notary service consumers would notice if I swapped it out for e.g. a master/slave arrangement (the master aggregator writes the peg; slaves pass their peg up to the master). That would allow me to scale horizontally (the master RDBMS lists the slaves, and each slave has its own RDBMS), so I don't see a practical limit with this approach; there's a sketch of that two-tier aggregation below. It's simply not optimised for scale at all yet; the use of a backing service is an expedience.

I don't understand your proposal well enough to see how it would work, but I do agree a fully decentralised approach would be more elegant. Do you think it's impossible to service the currently specified protocol with a decentralised implementation? Are there changes to the interface/protocol that would make it more amenable to a fully decentralised implementation? I think notary service providers should be able to compete on price and reputation, and so should be free to make pragmatic implementation decisions.
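A minimal sketch of that master/slave idea, using hypothetical sample data (`slave_batches`): each slave merkle-roots its own clients' document hashes, and the master merkle-roots the slave roots into the single peg that gets written on-chain.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Reduce a list of hashes to one root, duplicating the last node
    on odd-length levels (as Bitcoin's merkle trees do)."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical sample data: each inner list is one slave's client documents.
slave_batches = [
    [b"doc-a1", b"doc-a2"],
    [b"doc-b1", b"doc-b2", b"doc-b3"],
]

# Each slave aggregates its own clients and passes one root to the master...
slave_roots = [merkle_root([h(doc) for doc in batch]) for batch in slave_batches]

# ...and the master aggregates the slave roots into the single on-chain peg.
peg = merkle_root(slave_roots)
print(peg.hex())
```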
I think that solves a problem I don't have. A current OP_RETURN is 83 bytes; a current Qmhash is smaller (see @jbenet's comment in #4, and the size check sketched below). I do see the risk that the IPFS hash will grow at a faster rate (slowly) than Bitcoin's OP_RETURN does (maybe never), and eventually a single OP_RETURN will not be large enough for an address. I don't know what the probability of that actually happening is, but suspect it's moderately low. The consequence would be that each peg would need to eat more than one OP_RETURN, which is a bit nasty but not a fatal consequence for the scheme. On the flip side, the reason I don't want to use another coin is that I believe Bitcoin is the most Sybil-resistant consensus product available, and it's serviced by the most efficient proof-market. Our initial implementation was Ethereum, but it was solving the wrong problem: http://slay-the-bridge-trolls.readthedocs.io/en/latest/showthething.html
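To make the size comparison concrete, a quick check (a sketch; the example CID is arbitrary, and 80 bytes is the standard-relay data limit within the 83-byte scriptPubKey at the time of writing):

```python
import base58  # third-party: pip install base58

OP_RETURN_DATA_LIMIT = 80  # bytes of data allowed by standard relay policy

qmhash = "QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG"  # arbitrary example CID
raw = base58.b58decode(qmhash)  # raw multihash: <fn-code><length><digest>

print(len(raw))                          # 34 (0x12 0x20 + 32-byte sha2-256)
print(len(raw) <= OP_RETURN_DATA_LIMIT)  # True: one peg fits in one OP_RETURN today
```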
Love to :) We can make this ticket a monster epic, or you could raise tickets at https://github.org/ausdigital/ausdigital-nry (see also the slack channel on the ausdigital site, if that's your thing). I don't have an open source implementation, but am building a commercial one (https://notarizer.io, which depends on https://hipfish.io; both working but still in Alpha at the moment). There is a free-to-use testnet implementation at http://nry.testpoint.io; it only occasionally pegs to testnet because we are still messing with things and nobody is using it much yet. If you need support using the testpoint service, raise a ticket at https://github.com/test-point/testpoint-nry. The specs are open though (obviously) and pull requests are welcome.

Note that the AusDigital standards have a mandatory requirement for backwards compatibility with a bunch of OASIS B2B standards (relating to existing electronic document exchange standards), so it needs to work in an existing ecosystem (identity providers and so on). It's perhaps not as simple as you might design if you were starting with a clean sheet of paper, but those are the moving parts we have to dance with.
I'm investigating whether it's feasible to come up with a design such that IPFS can be used to offload data pertaining to pseudo-transactions that need not be on the blockchain, to provide valuable scalability properties (1 million transactions per minute or more). I have not yet investigated the game-theoretical attacks arising from the lack of fees paid by people creating these pseudo-transactions, but nevertheless I'd like to start brainstorming here about a potential design:
4a) The first step would be that IPFS stores the pseudo-transactions in some location that miners are aware of and can pull from. There needs to be a way to ensure that once a miner picks one up, it is classified as "included" and actually included in a block. Some other service(s) could also easily check whether that pseudo-transaction hash is already part of the merkle tree and somehow mark it as "included".
4b) Since the miner would post the merkle tree to IPFS, clients would read and validate it as required by the business processes that depend on the pseudo-transactions. (A sketch of both steps follows below.)
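A minimal sketch of 4a/4b, with the IPFS reads and writes replaced by stand-in data (everything named here is an assumption for illustration, not an existing API): the miner merkle-roots the pulled pseudo-transactions (4a), and a client later verifies inclusion against the published tree (4b).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# 4a: stand-in for pseudo-transactions pulled from the agreed IPFS location.
pending = [b"pseudo-tx-1", b"pseudo-tx-2", b"pseudo-tx-3"]
leaves = [h(tx) for tx in pending]

# Build the merkle tree, keeping every level so inclusion proofs can be served.
levels = [leaves]
while len(levels[-1]) > 1:
    lvl = list(levels[-1])
    if len(lvl) % 2:
        lvl.append(lvl[-1])
    levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
root = levels[-1][0]  # the miner commits this root in the block

# 4b: the miner posts `levels` to IPFS; a client checks its pseudo-tx is
# "included" by walking sibling hashes from its leaf up to the root.
def inclusion_proof(index: int) -> list:
    proof = []
    for lvl in levels[:-1]:
        padded = lvl + [lvl[-1]] if len(lvl) % 2 else lvl
        proof.append(padded[index ^ 1])  # sibling at this level
        index //= 2
    return proof

def verify(leaf: bytes, index: int, proof: list, root: bytes) -> bool:
    node = leaf
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

print(verify(leaves[2], 2, inclusion_proof(2), root))  # True
```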
4a is the tricky part, I think: can this type of thing be done so that people send some information to a known spot in IPFS, and it gets picked up and tagged by an independent third party?