📅 Original date posted: 2018-02-23
📝 Original message:
Hi Rusty,
> 1. query_short_channel_id
> IMPLEMENTATION: trivial
*thumbs up*
> 2. query_channel_range/reply_channel_range
> IMPLEMENTATION: requires channel index by block number, zlib
For the sake of expediency of deployment, if we add a byte (or two) to
denote the encoding/compression scheme, we can immediately roll out the
vanilla scheme (just list the IDs), then progressively roll out more
context-specific optimized schemes.
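To make that concrete, here's a rough Go sketch of what the sender side
could look like. The scheme values (0x00 = raw, 0x01 = zlib) are placeholders
I made up for illustration, not spec:

    package gossipqueries

    import (
        "bytes"
        "compress/zlib"
        "encoding/binary"
        "errors"
    )

    const (
        encodingRaw  byte = 0x00 // sorted list of raw 8-byte short channel IDs
        encodingZlib byte = 0x01 // the same list, zlib-compressed
    )

    // encodeShortChanIDs serializes the sorted IDs and prefixes the
    // encoding/compression scheme byte, so new schemes can be added later
    // without changing the message framing.
    func encodeShortChanIDs(ids []uint64, encoding byte) ([]byte, error) {
        var raw bytes.Buffer
        for _, id := range ids {
            if err := binary.Write(&raw, binary.BigEndian, id); err != nil {
                return nil, err
            }
        }

        out := []byte{encoding}
        switch encoding {
        case encodingRaw:
            return append(out, raw.Bytes()...), nil
        case encodingZlib:
            var compressed bytes.Buffer
            w := zlib.NewWriter(&compressed)
            if _, err := w.Write(raw.Bytes()); err != nil {
                return nil, err
            }
            if err := w.Close(); err != nil {
                return nil, err
            }
            return append(out, compressed.Bytes()...), nil
        default:
            return nil, errors.New("unknown encoding type")
        }
    }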
> 3. A gossip_timestamp field in `init`
> This is a new field appended to `init`: the negotiation of this feature
> bit overrides `initial_routing_sync`
As I've brought up before, from my PoV we can't append any additional
fields to the `init` message, as it already contains *two* variable-sized
fields (and no fixed-size fields). Aside from this, it seems that the
`init` message should simply be for exchanging versioning information, which
may govern exactly *which* messages are sent after it. Otherwise, by adding
_additional_ fields to the `init` message, we paint ourselves into a corner
and can never remove them. Compare that to using the `init` message purely
to set up the initial session context, where we can safely add other bits to
nullify or remove certain expected messages.
With that said, this should instead be a distinct `chan_update_horizon`
message (or w/e name). If a particular bit is set in the `init` message,
then the next message *both* sides send *must* be `chan_update_horizon`.
Another advantage of making this a distinct message is that either party
can update this horizon/filter at any time, to ensure that they only receive
the *freshest* updates. Otherwise, one can imagine a very long-lived
connection (say weeks) where the remote party keeps sending me very dated
updates (wasting bandwidth) when I only really want the *latest*.
This can incorporate Decker's idea about having a high+low timestamp. I
think this is desirable, as then for the initial sync case the receiver can
*precisely* control their "verification load" to ensure they only process a
particular chunk at a time.
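Something like the following sketch is what I have in mind; the name and
exact field layout are just placeholders for discussion, not a concrete wire
format:

    // chanUpdateHorizon is a placeholder for the proposed message: a node
    // only wants channel_updates whose timestamps fall within this window,
    // and can re-send the message at any time to move the window forward.
    type chanUpdateHorizon struct {
        ChainHash      [32]byte // chain the filter applies to
        FirstTimestamp uint32   // don't send me updates older than this...
        TimestampRange uint32   // ...or newer than FirstTimestamp+TimestampRange
    }

    // shouldRelay reports whether an update with the given timestamp falls
    // within the horizon the peer last advertised.
    func (h *chanUpdateHorizon) shouldRelay(updateTimestamp uint32) bool {
        return updateTimestamp >= h.FirstTimestamp &&
            updateTimestamp < h.FirstTimestamp+h.TimestampRange
    }

Re-sending the message would simply replace the previous filter, which
covers both the long-lived connection case and chunked initial sync.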
Fabrice wrote:
> We could add a `data` field which contains zipped ids like in
> `reply_channel_range` so we can query several items with a single message?
I think this is an excellent idea! It would allow batched requests in
response to a channel range message. I'm not so sure we need to jump
*straight* to compressing everything, however.
> We could add an additional `encoding_type` field before `data` (or it
> could be the first byte of `data`)
Great minds think alike :-)
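On the receiving end, parsing such a batched query could be as simple as the
sketch below. The scheme values 0x00/0x01 mirror the made-up ones from my
earlier sketch; the real values would be fixed by the spec:

    package gossipqueries

    import (
        "bytes"
        "compress/zlib"
        "encoding/binary"
        "errors"
        "io/ioutil"
    )

    // decodeQueryData parses the `data` field of a batched query: the first
    // byte names the encoding, the rest is the (possibly compressed) list of
    // 8-byte short channel IDs.
    func decodeQueryData(data []byte) ([]uint64, error) {
        if len(data) == 0 {
            return nil, errors.New("empty data field")
        }
        encoding, payload := data[0], data[1:]

        // Undo the optional compression layer first.
        switch encoding {
        case 0x01: // zlib (placeholder value, matching the earlier sketch)
            r, err := zlib.NewReader(bytes.NewReader(payload))
            if err != nil {
                return nil, err
            }
            defer r.Close()
            if payload, err = ioutil.ReadAll(r); err != nil {
                return nil, err
            }
        case 0x00: // raw, nothing to undo
        default:
            return nil, errors.New("unknown encoding type")
        }

        if len(payload)%8 != 0 {
            return nil, errors.New("malformed short channel ID list")
        }
        ids := make([]uint64, 0, len(payload)/8)
        for i := 0; i < len(payload); i += 8 {
            ids = append(ids, binary.BigEndian.Uint64(payload[i:i+8]))
        }
        return ids, nil
    }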
If we're generally in rough agreement about this initial "kick the can down
the road" approach, I'll start implementing some of this in a prototype
branch for lnd. I'm very eager to solve the zombie churn and the initial
burst, both of which can be very hard on light clients.
-- Laolu
On Wed, Feb 21, 2018 at 10:03 AM Fabrice Drouin <fabrice.drouin at acinq.fr>
wrote:
> On 20 February 2018 at 02:08, Rusty Russell <rusty at rustcorp.com.au> wrote:
> > Hi all,
> >
> > This consumed much of our lightning dev interop call today! But
> > I think we have a way forward, which is in three parts, gated by a new
> > feature bitpair:
>
> We've built a prototype with a new feature bit `channel_range_queries`
> and the following logic:
> When you receive their init message and check their local features
> - if they set `initial_routing_sync` and `channel_range_queries` then
> do nothing (they will send you a `query_channel_range`)
> - if they set `initial_routing_sync` and not `channel_range_queries`
> then send your routing table (as before)
> - if you support `channel_range_queries` then send a
> `query_channel_range` message
>
> This way new and old nodes should be able to understand each other
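For clarity, this is how I read that negotiation in code form (a Go-ish
sketch with made-up names and stubs, not anyone's actual implementation):

    package gossipqueries

    import "fmt"

    // peerFeatures mirrors the two local feature bits involved; the field
    // names are made up for illustration.
    type peerFeatures struct {
        InitialRoutingSync  bool
        ChannelRangeQueries bool
    }

    // Stubs standing in for the real gossip machinery.
    func sendFullRoutingTable()  { fmt.Println("dumping full routing table") }
    func sendQueryChannelRange() { fmt.Println("sending query_channel_range") }

    // onPeerInit applies the logic above once the remote peer's init message
    // arrives; weSupportRangeQueries is our own feature setting.
    func onPeerInit(remote peerFeatures, weSupportRangeQueries bool) {
        switch {
        case remote.InitialRoutingSync && remote.ChannelRangeQueries:
            // Do nothing: they'll drive the sync with query_channel_range.
        case remote.InitialRoutingSync:
            // Legacy peer: send the full routing table as before.
            sendFullRoutingTable()
        }

        if weSupportRangeQueries {
            sendQueryChannelRange()
        }
    }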
>
> > 1. query_short_channel_id
> > =========================
> >
> > 1. type: 260 (`query_short_channel_id`)
> > 2. data:
> > * [`32`:`chain_hash`]
> > * [`8`:`short_channel_id`]
>
> We could add a `data` field which contains zipped ids like in
> `reply_channel_range` so we can query several items with a single
> message ?
>
> > 1. type: 262 (`reply_channel_range`)
> > 2. data:
> > * [`32`:`chain_hash`]
> > * [`4`:`first_blocknum`]
> > * [`4`:`number_of_blocks`]
> > * [`2`:`len`]
> > * [`len`:`data`]
>
> We could add an additional `encoding_type` field before `data` (or it
> could be the first byte of `data`)
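(As an aside, in struct form I picture the reply roughly like this; the
field sizes follow the draft quoted above, with `Data` being the encoding
byte plus the encoded IDs. Illustrative only, not lnd's wire code.)

    // replyChannelRange mirrors the draft layout quoted above; the 2-byte
    // `len` is implied by len(Data) when serialized.
    type replyChannelRange struct {
        ChainHash      [32]byte
        FirstBlocknum  uint32
        NumberOfBlocks uint32
        Data           []byte // encoding_type byte + encoded short channel IDs
    }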
>
> > Appendix A: Encoding Sizes
> > ==========================
> >
> > I tried various obvious compression schemes, in increasing complexity
> > order (see source below, which takes stdin and spits out stdout):
> >
> > Raw = raw 8-byte stream of ordered channels.
> > gzip -9: gzip -9 of raw.
> >   splitgz: all blocknums first, then all txnums, then all outnums, then gzip -9
> >   delta: CVarInt encoding: blocknum_delta,num,num*txnum_delta,num*outnum.
> > deltagz: delta, with gzip -9
> >
> > Corpus 1: LN mainnet dump, 1830 channels.[1]
> >
> > Raw: 14640 bytes
> > gzip -9: 6717 bytes
> > splitgz: 6464 bytes
> > delta: 6624 bytes
> > deltagz: 4171 bytes
> >
> > Corpus 2: All P2SH outputs between blocks 508000-508999 incl, 790844 channels.[2]
> >
> > Raw: 6326752 bytes
> > gzip -9: 1861710 bytes
> > splitgz: 964332 bytes
> > delta: 1655255 bytes
> > deltagz: 595469 bytes
> >
> > [1] http://ozlabs.org/~rusty/short_channels-mainnet.xz
> > [2] http://ozlabs.org/~rusty/short_channels-all-p2sh-508000-509000.xz
> >
>
> Impressive!
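(For reference, here's my rough reading of the `delta` scheme from the
appendix above in Go, using the standard library varint as a stand-in for
the CVarInt encoding Rusty used: per block it writes the blocknum delta, the
channel count, the txnum deltas, then the outnums.)

    package gossipqueries

    import "encoding/binary"

    // shortChannelID is the decomposed form of an 8-byte short channel ID.
    type shortChannelID struct {
        Block, TxIndex, Output uint32
    }

    // deltaEncode encodes a sorted channel list as, per block:
    // blocknum_delta, num, num*txnum_delta, num*outnum.
    func deltaEncode(sorted []shortChannelID) []byte {
        var out []byte
        put := func(v uint64) {
            var buf [binary.MaxVarintLen64]byte
            n := binary.PutUvarint(buf[:], v)
            out = append(out, buf[:n]...)
        }

        prevBlock := uint32(0)
        for i := 0; i < len(sorted); {
            // Gather all channels confirmed in the same block.
            j := i
            for j < len(sorted) && sorted[j].Block == sorted[i].Block {
                j++
            }
            group := sorted[i:j]

            put(uint64(sorted[i].Block - prevBlock)) // blocknum_delta
            put(uint64(len(group)))                  // num
            prevTx := uint32(0)
            for _, c := range group {
                put(uint64(c.TxIndex - prevTx)) // txnum_delta within the block
                prevTx = c.TxIndex
            }
            for _, c := range group {
                put(uint64(c.Output)) // outnum
            }

            prevBlock = sorted[i].Block
            i = j
        }
        return out
    }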