telemaco [ARCHIVE] on Nostr:
📅 Original date posted:2015-10-09
📝 Original message:Hello,
I have been working in database engineering for many years, and there are
some things I don't understand very well about how the bitcoin architecture
works. I have not written here before because I would not like to disturb
development with yet another of those far-from-implementation ideas that,
as is sometimes said here, do not contribute actual code.
In any case, today I was listening to the latest Beyond Bitcoin video
about the new BitShares 2.0 and how they are changing their transaction
structure to make it more similar to what relational database management
systems have been doing for 30 years.
Keep a checkpointed state and carry only the new transactions. In an
RDBMS, anyone who wants to do historical research can simply fetch the
transaction log backups and replay every single transaction since the
beginning of history.
Why is the bitcoin network replaying every single transaction since the
beginning instead of starting from a more recent state? Why is that
information even stored on every core node? Couldn't we just keep a
checkpointed state plus the new transactions, and leave the backup of all
transactions since the beginning of history to "historical" nodes or
collectors?
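To make the RDBMS analogy concrete, here is a minimal sketch in Python (all names and the toy account-balance state are hypothetical, purely illustrative; this is not Bitcoin Core or any real database API). It shows the two recovery paths: replaying the full log from genesis versus loading a checkpoint and replaying only the tail.

```python
# Toy sketch of checkpoint-plus-log recovery, as in an RDBMS.
# The state is a simple account -> balance map; "transactions" are transfers.
# All names are illustrative, not Bitcoin Core APIs.

def apply_tx(state, tx):
    """Apply one transfer transaction to the state (in place)."""
    sender, receiver, amount = tx
    state[sender] = state.get(sender, 0) - amount
    state[receiver] = state.get(receiver, 0) + amount
    return state

def replay(log, start_state=None):
    """Rebuild state by replaying a transaction log from a starting state."""
    state = dict(start_state or {})
    for tx in log:
        apply_tx(state, tx)
    return state

# Full history since "genesis":
full_log = [("coinbase", "alice", 50), ("alice", "bob", 20), ("bob", "carol", 5)]

# Path 1: replay everything from the beginning (what bitcoin nodes do today).
state_full = replay(full_log)

# Path 2: load a checkpoint taken after the first two transactions,
# then replay only the log tail (the RDBMS recovery model).
checkpoint = replay(full_log[:2])          # persisted state snapshot
state_from_checkpoint = replay(full_log[2:], start_state=checkpoint)

assert state_full == state_from_checkpoint  # both paths reach the same state
```

Both paths agree on the final state; the checkpoint path just avoids touching the old part of the log, which is the point of the proposal.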
Replicated RDBMSs have been working with this model for some time. They
can replicate at the table, column, index, row, or even database level
across many datacenters and continents, and they already serve the
financial world, banks, and exchanges. Their TPS is very high because they
transfer only the smallest set of transactions that nodes decide to be
subscribed to; maybe a Japanese exchange just needs transactional info for
Japanese stocks on NASDAQ, or something similar. But even if they
subscribe to everything, the transactional info is, to some extent, just a
very small amount of information.
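As a hedged illustration of that subscription idea (hypothetical names, not any real replication product's API), filtered replication means each replica registers a predicate and receives only the matching transactions from the feed:

```python
# Toy sketch of subscription-filtered replication: each subscriber registers
# a predicate and receives only the transactions that match it.

def replicate(feed, subscribers):
    """Deliver each transaction only to subscribers whose filter matches."""
    for tx in feed:
        for name, (predicate, inbox) in subscribers.items():
            if predicate(tx):
                inbox.append(tx)

feed = [
    {"market": "NASDAQ", "symbol": "SONY", "qty": 100},
    {"market": "NASDAQ", "symbol": "AAPL", "qty": 50},
    {"market": "NASDAQ", "symbol": "NTDOY", "qty": 30},
]

subscribers = {
    # A Japanese exchange subscribing only to Japanese stocks on NASDAQ.
    "tokyo": (lambda tx: tx["symbol"] in {"SONY", "NTDOY"}, []),
    # A full archival replica subscribing to everything.
    "archive": (lambda tx: True, []),
}

replicate(feed, subscribers)
# tokyo receives only the 2 matching rows; archive receives all 3.
```

The bandwidth saving comes purely from the predicate: most replicas carry far less than the full feed, while the archival replica still preserves complete history.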
Couldn't we have just a very small transactional system with the fewest
possible working transactions and advancing checkpointed states? With that
structure we should be able to run nodes the size of watches, instead of
holding everything forever, for all eternity, and hoping that Moore's law
keeps allowing infinite growth. What if five submarine internet cables get
cut by an earthquake or a war, or there is a shortage of materials for
chip manufacturing and the network's Moore's-law growth cannot keep up?
Shouldn't performance optimization and capacity planning go both ways?
Having a really small working "transaction log" would allow companies to
relay some transactional info to little PDAs in warehouses, or to relay a
small amount of information over a satellite link, instead of every single
transaction of the company forever.
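A hedged sketch of what an "advancing checkpoint" could look like for bitcoin-style outputs (all names hypothetical, not Bitcoin Core code): the working state is just the set of unspent outputs, and once a transaction is applied, the outputs it spent no longer need to be stored by a working node.

```python
# Toy sketch of pruning behind an advancing checkpoint: the working state
# is only the unspent-output set; spent history is left to archival nodes.

def advance_checkpoint(utxos, new_txs):
    """Apply new transactions to the UTXO set and return the pruned state.

    Each tx consumes some outputs (by id) and creates new ones; after it is
    applied, the spent outputs drop out of the working state entirely.
    """
    utxos = dict(utxos)
    for tx in new_txs:
        for spent_id in tx["spends"]:
            del utxos[spent_id]            # spent: prune from working state
        utxos.update(tx["creates"])        # new outputs enter the state
    return utxos

# Checkpointed state: three unspent outputs.
checkpoint = {"out1": 50, "out2": 20, "out3": 5}

# New transactions since the checkpoint.
new_txs = [
    {"spends": ["out1"], "creates": {"out4": 30, "out5": 20}},
    {"spends": ["out2", "out5"], "creates": {"out6": 40}},
]

state = advance_checkpoint(checkpoint, new_txs)
# Only live outputs remain; out1, out2, out5 can be archived elsewhere.
assert state == {"out3": 5, "out4": 30, "out6": 40}
```

Note the total value is conserved across the checkpoint advance; only the dead history is discarded, which is what would let a tiny device hold the working state.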
After all, if we could have a very small transactional workload and leave
behind the overhead of all the previous transactions, we could have
bitcoin nodes on watches and an incredibly decentralized system that
nobody could disrupt, because the decentralization would be massive. We
could even create a very small ODBC/JDBC connector in the bitcoin client,
let any traditional RDBMS handle the heavy load, and let bitcoin core
relay everyone and their mother such a small amount of transactional data
that no one could ever disrupt it.
Just some thoughts. Please don't be too harsh; I am still researching the
bitcoin code, and my intentions are the best, as I could not be more
passionate about the project.
Thanks,
Published at 2023-06-07 17:43:05