Natanael [ARCHIVE] on Nostr

📅 Original date posted: 2016-10-11
📝 Original message: On 12 Oct 2016 01:33, "John Hardy via bitcoin-dev" <
bitcoin-dev at lists.linuxfoundation.org> wrote:
> Sidechains seem an inevitable tool for scaling. They allow Bitcoins to be
transferred from the main blockchain into external blockchains, of which
there can be any number with radically different approaches.
>
> In current thinking I have encountered, sidechains are isolated from each
other. To move Bitcoin between them would involve a slow transfer back to
the mainchain, and then out again to a different sidechain.
>
> Could we instead create a protocol for addressable blockchains, all using
a shared proof of work, which effectively acts as an Internet of
Blockchains?

More of a treechain / clusterchain, then?

> Instead of transferring Bitcoin into individual sidechains, you move them
into the master sidechain, which I'll call Angel. The Angel blockchain sits
at the top of a tree of blockchains, each of which can have radically
different properties, but are all able to transfer Bitcoin and data between
each other using a standardised protocol.
>
> Each blockchain has its own address, much like an IP address. The Angel
blockchain acts as a registrar, a public record of every blockchain and its
properties. Creating a blockchain is as simple as getting a
createBlockchain transaction included in an Angel block, with details of
parameters such as block creation time, block size limit, etc. A
decentralised DNS of sorts.
>
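
To make the registrar idea concrete: a createBlockchain transaction might
carry a parameter record roughly like the sketch below. This is a Python
illustration only; every field name is a placeholder, not a proposed format.

    # Hypothetical payload of a createBlockchain transaction in an Angel
    # block; all field names are illustrative.
    create_blockchain = {
        "address": "aa9",             # assigned identifier, like a DNS entry
        "parent": None,               # None = direct child of the Angel chain
        "block_interval_secs": 60,    # target block creation time
        "max_block_bytes": 2_000_000, # block size limit
        "rules": "utxo-v1",           # validation rule set for this chain
    }
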
> Mining in Angel uses a standardised format, creating hashes which allow
all different blockchains to contribute to the same Angel proof of work.
Miners must hash the address of the blockchain they are mining, and if they
mine a hash of sufficient difficulty for that blockchain they are able to
create a block.
>
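
A sketch of what the standardised mining format could look like, assuming a
double-SHA256 style proof of work where the chain's address is committed in
the hashed data (the concrete format is my guess, not from the post):

    import hashlib

    def pow_hash(header: bytes, chain_address: str) -> int:
        # The mined chain's address is committed into the hashed data, so
        # a solution is bound to the chain the miner declared.
        first = hashlib.sha256(header + chain_address.encode()).digest()
        return int.from_bytes(hashlib.sha256(first).digest(), "big")

    def meets_difficulty(hash_value: int, target: int) -> bool:
        # Each chain publishes its own target via the Angel registry.
        return hash_value <= target
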
> Blockchains can have child blockchains, so a child of Angel might have
the address aa9, and a child of aa9 might have the address aa9:e4d. The
lower down the tree you go, the lower the security, but the lower the
transaction fees. If a miner on a lower level produces a hash of sufficient
difficulty, they can use it on any parents, up to and including the Angel
blockchain, and claim fees on each.
>
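
Since difficulty presumably rises toward the root, one hash can be checked
against every ancestor's target; a sketch of walking an address like aa9:e4d
upward (helpers hypothetical, reusing meets_difficulty from above):

    def ancestors(address: str) -> list[str]:
        # "aa9:e4d" -> ["aa9:e4d", "aa9", ""]  ("" stands for the Angel root)
        parts = address.split(":")
        return [":".join(parts[:i]) for i in range(len(parts), -1, -1)]

    def claimable_chains(hash_value: int, address: str,
                         targets: dict[str, int]) -> list[str]:
        # Every level, up to and including Angel, whose difficulty target
        # this single proof of work satisfies; fees can be claimed on each.
        return [a for a in ancestors(address) if hash_value <= targets[a]]
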
> Children always synchronise and follow all parents (and their
reorganisations), and parents are aware of their children. This allows you
to do some pretty cool things with security. If a child tries to withdraw
to a parent after spending on the child (a double spend attempt) this will
be visible instantly, and all child nodes will immediately be able to
broadcast proof of the double spend to parent chain nodes so they do not
mine on those blocks. This effectively means children can inherit a level
of security from their parents without the same PoW difficulty.
>
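
The broadcast could be a compact, self-verifying message; a purely
illustrative shape (the "…" values stand for real hashes and transactions):

    # Hypothetical fraud proof a child-chain node broadcasts so that parent
    # miners refuse blocks containing the conflicting withdrawal.
    double_spend_proof = {
        "child_chain": "aa9:e4d",
        "spent_tx": "…",            # the spend already in the child chain
        "withdrawal_tx": "…",       # the conflicting withdrawal on the parent
        "merkle_proof": ["…"],      # inclusion proof of spent_tx in a child block
        "child_block_header": "…",
    }
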
> There are so many conflicting visions for how to scale Bitcoin. Angel
allows the free market to decide which approaches are successful, and for
complementary blockchains with different use cases, such as privacy, high
transaction volume, and Turing completeness to more seamlessly exist
alongside each other, using Bitcoin as the standard medium of exchange.
>
> I wrote this as a TLDR summary for a (still evolving) idea I had on the
best approach to scale Bitcoin infinitely. I've written more of my thoughts
on the idea at
https://seebitcoin.com/2016/09/introducing-buzz-a-turing-complete-concept-for-scaling-bitcoin-to-infinity-and-beyond/
>
> Does anybody think this would be a better, more efficient way of
implementing sidechains? It allows infinite scaling, and standardisation
allows better pooling of resources.

I've had a similar idea for quite a while, but I've never really
written it down in full. Here's one link:

http://www.metzdowd.com/pipermail/cryptography/2015-January/024338.html

Some thoughts on how to design it:

The basic idea is to compress the validation data maximally, and yet
achieve Turing completeness for an arbitrary number of interacting chains,
or "namespaces".

The whole thing is checkpointed and uses Zero-knowledge proofs to enable
secure pruning, making it essentially a rolling blockchain with complete
preservation of history. It grows approximately linearly with
non-deprecated state.

The latest checkpoint's header + the following headers and accompanying
Zero-knowledge proofs would together act as the root for the system.

Having that is all you would need to confirm that any particular piece of
data from the blockchain is correct, given that it comes with enough
metadata to trace it all the way to the root (Merkle tree hashes, ZKPs,
etc.).
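
A sketch of that check, assuming SHA-256 Merkle trees; verifying the
checkpoint's zero-knowledge proof itself is stubbed out here:

    import hashlib

    def merkle_root(leaf: bytes, path: list[tuple[bytes, bool]]) -> bytes:
        # Fold a leaf up through its sibling hashes; the bool says whether
        # the sibling sits to the right of the running hash.
        node = hashlib.sha256(leaf).digest()
        for sibling, sibling_is_right in path:
            pair = node + sibling if sibling_is_right else sibling + node
            node = hashlib.sha256(pair).digest()
        return node

    def data_is_valid(leaf: bytes, path, state_root: bytes) -> bool:
        # Holding only the latest checkpoint header (whose ZKP is assumed
        # verified separately), anyone can confirm a piece of data.
        return merkle_root(leaf, path) == state_root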

Every chain would be registered under a unique name (the root chain would
essentially just deal with registering chain names + their rules), and
would define its own external API towards other chains, and its own rules
for how its data can be updated and when. Every single
interaction with a chain is done by an atomic program (transaction), and
all sets of validated changes must be conflict-free (especially across
chains). Everything would practically be composed of a hierarchy of
interacting programs.
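
A chain registration on the root chain could then be little more than a name
plus rules and an API surface; a sketch with placeholder names:

    # Hypothetical root-chain entry: a unique name, the chain's own update
    # rules, and the external API it exposes to other chains.
    chain_definition = {
        "name": "payments",
        "update_rules": "programs must carry valid signatures over inputs",
        "external_api": {
            "transfer":   {"params": ["from", "to", "amount"]},
            "balance_of": {"params": ["account"]},
        },
    }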

Every set of programs (transactions) can be transformed into a "diff" on
the blockchain state plus an accompanying Zero-knowledge proof. The proofs
can even be chained, such that groups of users of one chain can create a
proof for their own changes, submit it to some chain coordinator who does
another compressing merge and proof generation, to then send it to the
miners who merges the collective changes for all chains and generates a
proof for the root.
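
Structurally that is recursive proof aggregation; a sketch where prove and
verify stand in for a real proving system:

    def aggregate(diffs_and_proofs, prove, verify):
        # Merge many (state_diff, proof) pairs into one diff plus one proof
        # that every input was itself valid. Chain users run this first,
        # coordinators run it per chain, miners run it once more for the root.
        merged = {}
        for diff, proof in diffs_and_proofs:
            assert verify(diff, proof)  # reject invalid contributions
            merged.update(diff)         # assumes the diffs are conflict-free
        return merged, prove(merged)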

Obviously that validation can get inefficient if many chains interact, as
you can't simply look for conflicting UTXOs in programs (unless the
chain designers are *really* smart with their conflict resolution
mechanisms). Either you have to use programmatic locking, very slow block
rates, or choose not to guarantee that any particular action has succeeded
(essentially turning validated programs (transactions) into *requests* to
the API up towards the root, to eventually be resolved later with responses
propagated back down, instead of having them be direct changes to the
state).
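
In that last model a validated program is just a queued request; a minimal
sketch of the shape, with all field names invented:

    from dataclasses import dataclass

    @dataclass
    class Request:
        # A validated program becomes a request propagated up toward the
        # root; the outcome comes back later as a response, rather than
        # as a direct change to the state.
        origin_chain: str
        api_call: str
        args: dict
        status: str = "pending"  # later resolved to "committed" or "rejected"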

The latter option requires a lot more interaction by the client to get the
intended behavior in many circumstances. The first two can both kill
performance (nobody wants small programs with a few round-trips to take a
week to execute).

I really do hope it can be resolved effectively. I'm guessing some serious
restrictions on the APIs would be necessary. You would want most programs
to be provably independent (such as not accessing the same resources) to be
able to easily just create a small checkpoint of the global state and
generate a proof for it. Programs simultaneously accessing resources that
don't guarantee commutativity for all actions would likely need to be rate
limited.
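
Provable independence could be as crude as declared resource sets; a sketch
of the check a proof generator might run:

    def independent(resources_a: set, resources_b: set,
                    commutative: set) -> bool:
        # Two programs can go into the same checkpoint if they touch no
        # common resource, or only resources where every action commutes.
        shared = resources_a & resources_b
        return shared <= commutative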

Best case scenario: some genius manages to create the equivalent of
Lightning Network (with in-chain arbitration authority assigned to chosen
servers in the chain definitions, and cross-chain negotiation between those
authorities when programs use the API) for processing the programs in near
real-time, and quickly settling on what changes to commit to the root.
Programs would practically need to be designed to be networked
(multi-stage) so that the servers can let them negotiate over their APIs
across all chains, until the server has a complete set of changes without
conflicts to commit to the root.