📅 Original date posted:2022-06-17
📝 Original message:
>
> The scope of these specs is transmitting data over the Lightning
> Network (over HTLC custom records). This is a use-case already used
> by a few projects ([1], [2], [3], [4]), and in this context
> we do not intend to debate the validity of it.
You can't just handwave away whether something is up for debate because a
few people did some proofs-of-concept that pretty much no one actually uses.
The main question here is "why?!" Why shoehorn data transmissions into LN
when you could pair LN payments with any other transmission method?
You could gate downloads and permissions in packets totally out of band
from the payments. The files could be torrents or any format that is
better-suited for the task. As an example, look at Dazaar tools:
https://github.com/bitfinexcom?q=dazaar-l&type=all
We don't need to put the whole internet & web inside of Lightning.
Lightning is for payments. If you try to use it for broad communication use
cases, you end up crippling both the use case and LN.
--
John Carvalho
CEO, Synonym.to <http://synonym.to/>
On Thu, Jun 16, 2022 at 4:48 PM <
lightning-dev-request at lists.linuxfoundation.org> wrote:
>
> Date: Thu, 16 Jun 2022 18:36:28 +0300
> From: George Tsagkarelis <george.tsagkarelis at gmail.com>
> To: lightning-dev at lists.linuxfoundation.org
> Subject: [Lightning-dev] DataSig -- Data signatures over Lightning
> Message-ID:
> <
> CACRHu9irXQEfLLDdTwZg93QsaZnyPjP71O9w24b1LxVkzNaDKA at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> # DataSig -- Data signatures over Lightning
>
> ## Introduction
>
> Greetings, Lightning devs
>
> This mail serves as an introduction to one of the two specs
> that we want to propose to the community.
> The scope of these specs is transmitting data over the Lightning
> Network (over HTLC custom records). This is a use-case already used
> by a few projects ([1], [2], [3], [4]), and in this context
> we do not intend to debate the validity of it.
>
> As mentioned, DataSig is one of the two specs we aim to propose:
> * DataSig: Concerns the authentication of some data with regards to
> the source and destination of the transmission.
> * DataStruct: Concerns the outer layer of the data structure,
> mainly focusing on the data fragmentation aspect of transmission.
>
> We seek feedback on the two specs as we want to improve and tweak
> them before proceeding with a BLIP proposal.
>
> ## DataSig
>
> This spec's aim is to describe the format of a structure representing
> a signature over some arbitrary data.
>
> Before proceeding, a few clarifications must be made:
> * The DataSig structure is placed inside a custom TLV record.
> * DataSig allows the receiving end to validate that:
>   * The data was authored by the source node.
>   * The data was meant to be received by the receiving node.
>
> The main scope of DataSig is assisting with data verification
> independently of what medium one chooses for data transmission.
> Nevertheless, for simplicity, in the follow-up DataStruct spec
> we assume the data to be transmitted over custom TLV records as well.
>
> We consider a compact encoding to be used for representing the
> DataSig structure over a TLV, so it is expressed as the following
> protobuf message:
>
> ```protobuf
> message DataSig {
> uint32 version = 1;
> bytes sig = 2;
> bytes senderPK = 3;
> }
> ```
>
> * `version`: The version of DataSig spec used.
> * `sig`: The bytes of the signature.
> * `senderPK`: The sender's public key.
>
> ### Generation
>
> In order to instantiate a DataSig signing the data `D`, one needs
> to follow these steps:
>
> 1. Populate `version` with the version that is going to be used.
> 2. Prepend the desired destination address (`A`) to `D`,
> creating a new byte array (`AD`).
> 3. Sign the byte array `AD`, generating a signature encoded in
> fixed-size LN wire format.
> 4. Populate the `sig` field with the generated signature.
> 5. Populate `senderPK` with the sender's own public key.
> 6. Encode the resulting DataSig structure to wire format
> (byte array `S`).
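
The generation steps above can be sketched in Python. This is a minimal illustration, not part of the spec: `make_datasig` is a hypothetical helper, a plain dict stands in for the protobuf message, and HMAC-SHA256 stands in for the fixed-size LN wire signature over secp256k1:

```python
import hashlib
import hmac

def make_datasig(data: bytes, dest_addr: bytes, sender_pk: bytes,
                 sender_sk: bytes, version: int = 1) -> dict:
    """Build a DataSig over data D addressed to dest_addr (steps 1-5)."""
    # Step 2: prepend the destination address A to D, forming AD.
    ad = dest_addr + data
    # Step 3: sign AD. HMAC-SHA256 is only a stand-in here for the real
    # fixed-size LN wire signature over secp256k1.
    sig = hmac.new(sender_sk, ad, hashlib.sha256).digest()
    # Steps 1, 4, 5: populate version, sig and senderPK.
    return {"version": version, "sig": sig, "senderPK": sender_pk}
```

Step 6 (encoding to the wire byte array `S`) would then be a plain protobuf serialization of this structure.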
>
> ### Verification
>
> Assuming that the destination node has retrieved:
> * The byte array of the data `D`
> * The byte array of the encoded signature struct `S`
>
> The data should be verified against the signature
> by following the procedure below:
>
> 1. Decode bytes `S` according to DataSig protobuf message definition.
> 2. If the signature `version` is unsupported or unknown, consider the
> data to be unsigned.
> 3. Prepend the node's own address (`A`) to byte array `D`, generating
> the byte array `AD`.
> 4. Verify the signature provided in `sig` field against the message
> `AD` and sender public key `senderPK`.
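
The verification procedure can be sketched the same way. Again a minimal illustration with hypothetical names: a plain dict stands in for the decoded DataSig, and HMAC-SHA256 stands in for secp256k1 verification (with a real signature scheme only `senderPK` would be needed, not a shared secret):

```python
import hashlib
import hmac

def verify_datasig(data: bytes, own_addr: bytes, datasig: dict,
                   sender_sk: bytes) -> bool:
    """Verify a decoded DataSig against data D (steps 2-4)."""
    # Step 2: unsupported or unknown version -> treat the data as unsigned.
    if datasig.get("version") != 1:
        return False
    # Step 3: prepend the receiving node's own address A, forming AD.
    ad = own_addr + data
    # Step 4: check the signature. HMAC-SHA256 is a stand-in for real
    # secp256k1 verification against senderPK.
    expected = hmac.new(sender_sk, ad, hashlib.sha256).digest()
    return hmac.compare_digest(expected, datasig["sig"])
```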
>
> ### Notes / Remarks
>
> * The scope of this spec is to deal with the verification
> of the author and intended recipient of transmitted data.
> We do not intend to solve the issue of associating a DataSig
> to the corresponding data (signed by it), in case they are
> not transmitted in pairs.
> For now, we assume that data and signature are transmitted
> over an HTLC's custom records in pairs.
>
> * You can find a formatted version of this document on
> [hackmd](https://hackmd.io/2pzHLslkRkGytfjKROv3AQ?view).
>
> --------------
>
> [1]: https://sphinx.chat
> [2]: https://github.com/joostjager/whatsat
> [3]: https://github.com/alexbosworth/balanceofsatoshis
> [4]: https://github.com/c13n-io/c13n-go
>
>
> --
> George Tsagkarelis | @GeorgeTsag | c13n.io
>
> ------------------------------
>
> Message: 2
> Date: Thu, 16 Jun 2022 18:48:26 +0300
> From: George Tsagkarelis <george.tsagkarelis at gmail.com>
> To: lightning-dev at lists.linuxfoundation.org
> Subject: [Lightning-dev] DataStruct -- Data fragmentation over
> Lightning
> Message-ID:
> <
> CACRHu9iOns0gzELrUj8yNkj1NE2baUM0RD6DnN1dDyy5TzV6_w at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> # DataStruct -- Data fragmentation over Lightning
>
> ## Introduction
>
> Greetings once again,
>
> This mail proposes a spec for data fragmentation over custom records,
> allowing for transmission of data exceeding the maximum allowed size
> over a single HTLC.
>
> As in the case of DataSig, we seek feedback as we want to improve
> and tweak this spec before submitting a BLIP version of it.
>
> ## DataStruct
>
> The purpose of this spec is to define a structure that describes
> fragmented data, allowing for transmission over separate HTLCs
> and assisting reassembly on the receiving end.
> The proposed fragmentation structure also allows out-of-order
> reception of fragments.
>
> Since these fragments are assumed to be transmitted over Lightning
> HTLCs, we want to use a compact encoding mechanism, thus we describe
> their structure with protobuf:
>
> ```protobuf
> message DataStruct {
> uint32 version = 1;
> bytes payload = 2;
> optional FragmentInfo fragment = 3;
> }
>
> message FragmentInfo {
> uint64 fragset_id = 1;
> uint32 total_size = 2;
> uint32 offset = 3;
> }
> ```
> * `version`: The version of DataStruct spec used.
> * `payload`: The data carried by this fragment.
> * `fragment`: Fragmentation information, in case of fragmented data.
>
> The `FragmentInfo` fields describe:
> * `fragset_id`: Identifier indicating a fragment set, common to all
> fragments of the same data.
> * `total_size`: The total data size this fragment is part of.
> * `offset`: Starting byte offset of this fragment's `payload`
> in the total data.
>
> If the total data can be transmitted over a single HTLC, then the
> `fragment` field should be omitted.
>
> If the `fragment` field is set on a received DataStruct instance, the
> receiving node should wait for the full fragment set to be received
> before reconstruction. For each received fragment of a fragment set
> (as indicated by `fragset_id`), the receiving node should assemble
> the data by inserting each `payload` at the offset indicated by the
> `fragment`'s `offset` field. Once the whole data range has been
> received, a node can safely assume the data has been received in
> full.
>
> ### Sending
>
> In this section we will walk through the procedure of utilizing
> DataStruct in order to transmit some data `D` that has a size of
> 42KB.
>
> It is also important to note that we don't describe an algorithm that
> efficiently and dynamically splits the byte array `D` into an
> optimal set of fragments. A fragment's transmission may fail for
> various reasons (including uncertain channel liquidity, stale routing
> data or route lengths that prohibit meaningful data injection).
> It is the responsibility of the sender to fragment the data and
> transmit the fragments towards the destination. The receiver simply
> receives fragments that will (ideally) completely cover `D`, allowing
> its reconstruction.
>
> In this example, we will assume that the sender will settle for
> splitting the data `D` into 84 fragments of 512B size each.
> This is not optimal, as it will probably result in increased
> transmission costs, depending on route length.
>
> A sender intending to transmit the data `D` to another node should:
>
> 1. Split the bytes of `D` into 84 fragments of 512B each.
> 2. Generate an identifier for this data transmission, `Di`.
> 3. For each fragment `f`, a `DataStruct` instance should be created:
>    1. Populate `version` with the spec version followed.
>    2. Populate `payload` with `f`.
>    3. Populate `fragment` as follows:
>       1. Populate `fragset_id` with `Di`.
>       2. Populate `total_size` with len(`D`).
>       3. Populate `offset` with the fragment's starting byte index.
>    4. Encode the created DataStruct instance, resulting in a byte
>       array `DS`.
>    5. Transmit `DS` over the custom records of an HTLC.
>    6. In case of failure, transmission can be retried over a
>       different route.
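
The sending-side fragmentation can be sketched as follows. This is a minimal illustration: `fragment_data` is a hypothetical helper, plain dicts stand in for encoded DataStruct messages, and transmission over HTLC custom records is omitted:

```python
def fragment_data(data: bytes, fragset_id: int, frag_size: int = 512) -> list:
    """Split data D into DataStruct-like dicts of at most frag_size bytes."""
    total = len(data)
    fragments = []
    for offset in range(0, total, frag_size):
        fragments.append({
            "version": 1,
            "payload": data[offset:offset + frag_size],
            # FragmentInfo: ties this piece to its fragment set and
            # records where its payload sits in the total data.
            "fragment": {
                "fragset_id": fragset_id,
                "total_size": total,
                "offset": offset,
            },
        })
    return fragments
```

For the 42KB example, splitting 84 * 512 = 43008 bytes with `frag_size=512` yields exactly the 84 fragments described above.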
>
> ### Receiving
>
> Continuing the last example, the receiving node can execute the
> following steps for each received fragment `DS` in order to assemble
> the data `D`:
>
> 1. Decode `DS` according to DataStruct definition.
> 2. Check `version` field, and decide whether to proceed or ignore
> the fragment.
> 3. If the received DataStruct instance contains a `fragment` field:
>    1. Retrieve the reconstruction buffer identified by `fragset_id`,
>       creating it with size `total_size` if it does not exist.
>    2. Insert `payload` at `offset` into the reconstruction buffer.
>    3. Check whether the reconstruction buffer is complete. If the
>       whole body of the buffer is filled, it contains the total
>       data `D`.
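
The receiving side can be sketched in the same style. A minimal illustration with hypothetical names: `reassemble` tracks per-`fragset_id` reconstruction buffers in a dict and returns the full data `D` once every byte of the declared range has been covered:

```python
def reassemble(buffers: dict, fragment: dict):
    """Insert one received DataStruct fragment; return D when complete."""
    info = fragment.get("fragment")
    if info is None:
        # Unfragmented transmission: the payload is the whole data.
        return fragment["payload"]
    fid = info["fragset_id"]
    # Step 3.1: retrieve (or create) the reconstruction buffer.
    if fid not in buffers:
        buffers[fid] = {"data": bytearray(info["total_size"]),
                        "filled": [False] * info["total_size"]}
    buf = buffers[fid]
    # Step 3.2: insert the payload at the indicated offset.
    off = info["offset"]
    payload = fragment["payload"]
    buf["data"][off:off + len(payload)] = payload
    for i in range(off, off + len(payload)):
        buf["filled"][i] = True
    # Step 3.3: if every byte of the range is covered, we are done.
    if all(buf["filled"]):
        return bytes(buf["data"])
    return None
```

Tracking coverage per byte (rather than counting fragments) also handles the overlapping-range case mentioned in the remarks below.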
>
> ### Notes / Remarks
>
> * We mention that the encoded DataStruct is placed inside a custom
> TLV record, but do not specify the exact TLV key. This is a spec
> regarding data fragment transmission, and as such should not define
> specific TLV keys to be used.
>
> * Interoperability could be achieved by different applications
> utilizing the same TLV as well as data encoding for transmission.
>
> * A node can send and receive payments that carry data in different
> TLV keys. It is the responsibility of the application to send and
> listen for data over specific TLV keys.
>
> * It is the responsibility of the sender to transmit fragments that
> allow for full data reconstruction on the receiving end.
>
> * Fragments could carry overlapping byte ranges (e.g. two fragments
> covering 0-511 and 256-767 overlap in the range 256-511).
>
> * A DataSig could accompany a transmitted DataStruct, allowing the
> receiving node to verify the data source and destination.
>
> * If a DataSig is also included with each fragment, the receiver
> could identify reconstruction buffers based not only on `fragset_id`
> but also on the sender's address. This means that a node could
> simultaneously receive two different fragment sets with the
> same `fragset_id`, as long as they originate from different
> nodes.
>
> * It is the responsibility of the sender to properly coordinate
> simultaneous transmissions to a destination node by using different
> `fragset_id` values for each fragment set.
>
> * If the sender uses an AMP payment's HTLCs to carry the different
> fragments, it is not strictly necessary to declare the `total_size`
> of the data. The condition for data reconstruction completion could
> be the success of the AMP payment, unless the sender wants to utilize
> both AMP and single-path payments for data transmission (transmit
> over multiple payments, possibly with multiple HTLCs per payment).
>
> * There is a lot of room for optimisation, such as signing larger
> chunks of data rather than each transmitted fragment. This way fewer
> DataSig instances would be transmitted, leaving more space available
> for the fragment data.
>
> * A working proof of concept that utilizes DataSig and DataStruct
> over single path payments can be found here:
> https://github.com/GeorgeTsagk/satoshi-read-write
>
>
> --
> George Tsagkarelis | @GeorgeTsag | c13n.io
>