Thomas Voegtlin [ARCHIVE] on Nostr:
Original date posted: 2014-03-27
Original message: On 27/03/2014 12:39, Mike Hearn wrote:
> One issue that I have is bandwidth: Electrum (and mycelium) cannot
> watch as many addresses as they want, because this will create too
> much traffic on the servers. (especially when servers send utxo merkle
> proofs for each address, which is not the case yet, but is planned)
>
>
> This is surprising and the first time I've heard about this. Surely your
> constraint is CPU or disk seeks? Addresses are small, I find it hard to
> believe that clients uploading them is a big drain, and mostly addresses
> that are in the lookahead region won't have any hits and so won't result
> in any downloads?
To be honest, I have not carried out a comprehensive examination of
server performance. What I can see is that Electrum servers are often
slowed down when a wallet with a large number (thousands) of addresses
shows up, and this is caused by disk seeks (especially on my slow VPS).
The master branch of electrum-server is also quite wasteful in terms of
CPU, because it uses client threads. I have another branch that uses a
socket poller, but that branch is not widely deployed yet.
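(For illustration only, here is a minimal sketch of what such a poll-based
loop looks like on a POSIX system, as opposed to one blocking thread per
client. This is not the actual electrum-server code, and the port number
is just a placeholder:)

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 50001))   # placeholder port
server.listen(50)

poller = select.poll()
poller.register(server, select.POLLIN)
clients = {}  # fd -> socket

while True:
    # One thread waits on all sockets at once, instead of one blocked
    # thread per connected client.
    for fd, event in poller.poll():
        if fd == server.fileno():
            conn, _addr = server.accept()
            poller.register(conn, select.POLLIN)
            clients[conn.fileno()] = conn
        elif event & select.POLLIN:
            conn = clients[fd]
            data = conn.recv(4096)
            if not data:                  # client disconnected
                poller.unregister(fd)
                del clients[fd]
                conn.close()
            else:
                conn.sendall(data)        # echo back; a real server would parse requests here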
I reckon that I might have been a bit too conservative in setting the
number of unused receiving addresses watched by Electrum clients (until
now, the default "gap limit" has always been 5). The reason is that, if
I increase that number, there is no way to go back to a smaller value,
because it needs to remain compatible with all previously released
versions. However, Electrum server performance has improved over time,
so I guess it could safely be raised to 20 (see my previous post to slush).
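For reference, the gap limit is just a stopping rule for wallet
recovery: derive receiving addresses in sequence, and stop once that
many consecutive unused ones have been seen. A minimal sketch in Python
(derive_address and address_is_used are placeholder callbacks, not
Electrum's actual API):

def recover_receiving_addresses(derive_address, address_is_used, gap_limit=20):
    # Derive addresses in order; stop after gap_limit consecutive unused ones.
    addresses = []
    consecutive_unused = 0
    index = 0
    while consecutive_unused < gap_limit:
        addr = derive_address(index)
        addresses.append(addr)
        if address_is_used(addr):
            consecutive_unused = 0   # history found: reset the gap counter
        else:
            consecutive_unused += 1
        index += 1
    return addresses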
In terms of bandwidth, I am referring to my Android version of Electrum.
When it runs on a 3G connection, it sometimes takes up to 1 minute to
synchronize (with a wallet that has hundreds of addresses). However, I
have not checked whether this is caused by addresses or by block headers.
>
> This constraint is not so important for bloom-filter clients.
>
>
> Bloom filters are a neat way to encode addresses and keys but they don't
> magically let clients save bandwidth. A smaller filter results in less
> upload bandwidth but more download (from the wallets perspective). So
> I'm worried if you think this will be an issue for your clients: I
> haven't investigated bandwidth usage deeply yet, perhaps I should.
>
> FWIW the current bitcoinj HDW alpha preview pre-gens 100 addresses on
> both receive and change branches. But I'm not sure what the right
> setting is.
Heh, may I suggest 20 in the receive branch?
For the change branch, there is no need to watch a large number of
unused addresses, because the wallet should try to fill all the gaps in
the sequence of change addresses.
(Electrum does that. It also watches 3 unused addresses at the end of
that sequence, in order to cope with gaps that a blockchain reorg might
create. As an extra safety measure, it also waits for 3 confirmations
before using a new change address, which sometimes results in address
reuse, but I guess a smarter strategy could avoid that.)
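Here is a rough sketch of how that change policy could look in Python;
the is_used and confirmations helpers are hypothetical, not Electrum's
actual code:

def pick_change_address(change_addrs, is_used, confirmations, min_conf=3):
    if not change_addrs:
        return None  # caller should derive the first change address
    # Fill the earliest gap: prefer the first unused address in the sequence.
    for addr in change_addrs:
        if not is_used(addr):
            return addr
    # Every derived change address has been used. Only ask for a fresh one
    # once the most recent address has min_conf confirmations; until then,
    # reuse it (this is the occasional address reuse mentioned above).
    last = change_addrs[-1]
    if confirmations(last) >= min_conf:
        return None  # caller should derive a new change address
    return last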
>
> We also have to consider latency. The simplest implementation from a
> wallets POV is to step through each transaction in the block chain one
> at a time, and each time you see an address that is yours, calculate the
> next ones in the chain. But that would be fantastically slow, so we must
> instead pre-generate a larger lookahead region and request more data in
> one batch. Then you have to recover if that batch ends up using all the
> pre-genned addresses. It's just painful.
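(For illustration, a minimal sketch of that batched lookahead, in
contrast to the one-address-at-a-time scan sketched earlier; the
derive_address and batch_lookup helpers are hypothetical, not bitcoinj's
API. batch_lookup returns the subset of the given addresses that have
transaction history:)

def discover_used_addresses(derive_address, batch_lookup, lookahead=100):
    used_addresses = set()
    highest_used = -1   # highest index known to have history
    next_index = 0      # first index not yet queried
    while next_index <= highest_used + lookahead:
        # Pre-generate the rest of the lookahead window and query it in
        # one batch instead of one round trip per address.
        start = next_index
        stop = highest_used + lookahead + 1
        batch = [derive_address(i) for i in range(start, stop)]
        next_index = stop
        hits = batch_lookup(batch)
        used_addresses |= hits
        for offset, addr in enumerate(batch):
            if addr in hits:
                highest_used = max(highest_used, start + offset)
        # If hits landed near the end of the window, the loop condition
        # extends the window and another batch is requested.
    return used_addresses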
>
> My opinion, as far as Electrum is concerned, is that merchant accounts
> should behave differently from regular user accounts: While merchants
> need to generate an unlimited number of receiving addresses, it is also
> acceptable for them to have a slightly more complex wallet recovery
> procedure
>
>
> Maybe. I dislike any distinction between users and merchants though. I
> don't think it's really safe to assume merchants are more sophisticated
> than end users.
Well, it depends on what we mean by "merchant". I was thinking more of
a website running a script than a brick-and-mortar ice cream seller. :)
>
> but also because we want fully automated synchronization between
> different
> instances of a wallet, using no other source of information than
> the blockchain.
>
>
> I think such synchronization won't be possible as we keep adding
> features, because the block chain cannot sync all the relevant data. For
> instance Electrum already has a label sync feature. Other wallets need
> to compete with that, somehow, so we need to build a way to do
> cross-device wallet sync with non-chain data.
Oh, I was not referring to label sync, but only to the synchronization
of the list of addresses in the wallet. Label sync is an Electrum plugin
that relies on a centralized server. Using a third party server is
acceptable in that case, IMO, because you will not lose your coins if
the server fails.