IRC meeting summary for 2017-05-18
Overview
Notes / short topics
- The new fee estimator has been merged. Running master now will blow away your old fee estimates. Morcos will try to get an improvement in for 0.15 that makes the transition more seamless.
Main topics
- clientside filtering
- pruned-node serving
- bytes_serialized
clientside filtering
background
To spare lightweight SPV clients from having to download the entire content of the blockchain, BIP37 was created. This BIP uses bloom filters in the peer-to-peer protocol to let full nodes send an SPV client a small subset of transactions that includes all the transactions relevant to its wallet.
Over time this approach has proven weaker than originally thought.
- The privacy expected from the probabilistic nature of BIP37's bloom filters turned out to be non-existent.
- As the blockchain grows larger, there's a significant processing load on full nodes, as they have to process the entire blockchain to serve a syncing SPV client. This also makes them vulnerable to DoS attacks.
- SPV clients cannot have strong security expectations: a merkle path allows them to validate that an output was spendable at some point, but it does not prove the output is still unspent later on.
There have been many ideas proposed to improve this situation, all with their own trade-offs. Recently Roasbeef has been working on an idea based on golomb coded sets, which are smaller than bloom filters but require more CPU time.
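Golomb coded sets get their size advantage by hashing every item uniformly into a range proportional to the set size, sorting the results, and Golomb-Rice coding the small deltas between them. The sketch below illustrates only the general technique; the hash function, the parameter `P`, and the example filter items are assumptions of this summary, not part of Roasbeef's proposal.

```python
import hashlib

# Illustrative Rice parameter; P = 19 matches what BIP158 later
# standardized, but at meeting time this was still an open question.
P = 19

def hash_to_range(item: bytes, f: int) -> int:
    # Map an item uniformly into [0, f). Real implementations use a fast
    # keyed hash like SipHash; SHA-256 here is purely for illustration.
    h = int.from_bytes(hashlib.sha256(item).digest()[:8], "big")
    return (h * f) >> 64

def golomb_rice_encode(values, p: int) -> str:
    # Sort, delta-encode, then Golomb-Rice code each delta:
    # quotient in unary ("1" * q + "0"), remainder in p fixed bits.
    out, last = [], 0
    for v in sorted(values):
        delta, last = v - last, v
        q, r = delta >> p, delta & ((1 << p) - 1)
        out.append("1" * q + "0" + format(r, "0{}b".format(p)))
    return "".join(out)

# Hypothetical filter contents: outpoints and a scriptPubKey from a block.
items = [b"outpoint:aa:0", b"outpoint:bb:1", b"spk:76a914...88ac"]
f = len(items) << P   # gives a false-positive rate of roughly 1 / 2**P
encoded = golomb_rice_encode([hash_to_range(i, f) for i in items], P)
print(len(encoded), "bits total, about", P + 2, "bits per item expected")
```

Because each delta averages about 2**P, each item costs roughly P + 2 bits, versus the ~1.44 * P bits per item a bloom filter needs for the same false-positive rate; the price is that membership queries must decode the whole compressed set.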
meeting comments
To start, this would be something that nodes precompute and store. Having blocks commit to the filters can be done later on.
Gmaxwell is doubtful about the performance of golomb coding, but he's interested to see it.
Roasbeef suggested two filters: one for super-lightweight clients, which includes outpoints and script data pushes, and another for more sophisticated queries, which includes the witness stack, sigScript data pushes, and transaction IDs. Gmaxwell comments that including witness data has long-term implications, as a few years from now we can expect most nodes not to store witness data more than a year old. Roasbeef clarifies that the rationale for including witness data was to allow light clients to efficiently scan for things like stealth addresses. However, in practice no one uses it that way, and anyone who has implemented it is scanning via a centralized server. Sipa's preference would be to just have outpoints and full scriptPubKeys. The only advantage of data pushes over the full scriptPubKey is for bare multisig, but those aren't really used in practice.
Jonasschnelli adds that Kallewoof has a draft implementation of serving filters over the p2p network through bloom filtering.
meeting conclusion
- Roasbeef will write a BIP incorporating the feedback from the meeting (mailing list post)
pruned-node serving
background
Currently pruned nodes don't advertise themselves as having any blocks, and as a result they don't serve any blocks to other peers. As the blockchain size continues to grow, it's likely the number of pruned nodes will rise in the future.
Non-pruned full nodes advertise themselves with NODE_NETWORK. In the 2017-05-27 meeting, Jonasschnelli proposed a message, NODE_NETWORK_LIMITED, to advertise pruned nodes that relay and are able to serve the last 144 blocks (one day's worth of blocks).
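In service-bit terms, a peer deciding whether another peer can serve it a given block would check depth against the advertised limit. A minimal sketch follows; the bit value (bit 10) and the 288-block depth match what was later specified in BIP159, but both were still under discussion at this meeting, so treat them as assumptions.

```python
# Service bits (bit positions are assumptions per the later BIP159 spec).
NODE_NETWORK = 1 << 0            # archival node: serves the whole chain
NODE_NETWORK_LIMITED = 1 << 10   # pruned node: serves only recent blocks

def can_serve_block(services: int, tip_height: int,
                    height: int, depth_limit: int = 288) -> bool:
    # Headers can be served for the whole chain regardless; this check
    # concerns full block data only.
    if services & NODE_NETWORK:
        return True
    if services & NODE_NETWORK_LIMITED:
        return tip_height - height < depth_limit
    return False

# A pruned peer can serve a block 100 deep, but not one 80,000 deep:
print(can_serve_block(NODE_NETWORK_LIMITED, 480000, 479900))  # True
print(can_serve_block(NODE_NETWORK_LIMITED, 480000, 400000))  # False
```

This also illustrates Gmaxwell's point below: if the peer turns out not to have the block you need, you simply connect to someone else.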
meeting comments
Sipa has data showing the relative depth of each block downloaded from his node, excluding compact blocks. It confirms 144 blocks is too small and the pruning minimum of 288 is better. Gmaxwell thinks the BIP shouldn't have any headroom/buffer: nodes should just advertise what they actually store. If it turns out a node doesn't have all the blocks you need, you can connect to someone else. The BIP should also just state the number of blocks that need to be stored, and not make any connection with time, as light clients know how many blocks they are behind after the header sync, independent of time. Gmaxwell adds the BIP should also mention that such nodes can serve headers for the whole chain.
meeting conclusion
- Jonasschnelli will incorporate the feedback into the BIP.
bytes_serialized
background
Currently gettxoutsetinfo has a field called bytes_serialized, which is based on a theoretical serialization of the UTXO set data to show how large the database is in bytes. In practice, however, this isn't a good metric for how much space the UTXO set takes on disk.
meeting comments
Wumpus thinks there should be a neutral way of representing the UTXO set size that doesn't rely on estimates for a specific database format. He'd be fine with it just being the size of the keys and values in a neutral format, not accounting for LevelDB's prefix compression. Changing the format of bytes_serialized allows for changing its definition. We should also report the actual disk usage in gettxoutsetinfo.
Wumpus thinks it would make sense to rename the field as well.
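A database-agnostic metric of this kind just sums a fixed per-output overhead plus the scriptPubKey bytes. The sketch below is an illustration of that idea; the specific constants mirror my understanding of the bogosize field Bitcoin Core later added, and are not something fixed at this meeting.

```python
def utxo_bogosize(script_pubkey_lengths):
    """Database-agnostic UTXO set size estimate, in bytes.

    Per unspent output we count: txid (32) + output index (4) +
    height/coinbase flag (4) + amount (8) + script length prefix (2)
    + the scriptPubKey bytes themselves. These constants are an
    assumption of this sketch, following the scheme Bitcoin Core
    later adopted for its `bogosize` field.
    """
    return sum(50 + n for n in script_pubkey_lengths)

# Two standard P2PKH outputs (25-byte scriptPubKeys):
print(utxo_bogosize([25, 25]))  # 150
```

The point of such a deliberately "bogus" size is that it stays stable across database formats and compression schemes, so it can be compared across nodes and versions, unlike on-disk usage.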
meeting conclusion
- PR #10195 will remove bytes_serialized; Sipa will create a separate PR adding new disk_size and bogosize fields to replace it.
High priority review
- Ryanofsky would like more review on #10295 (Move some WalletModel functions into CWallet), as it's blocking his IPC PRs.
- Jonasschnelli added #10240 (Add HD wallet auto-restore functionality)
Comic relief
Participants
IRC nick | Name/Nym |
---|---|
jonasschnelli | Jonas Schnelli |
sipa | Pieter Wuille |
cfields | Cory Fields |
luke-jr | Luke Dashjr |
kanzure | Bryan Bishop |
gmaxwell | Gregory Maxwell |
instagibbs | Gregory Sanders |
wumpus | Wladimir van der Laan |
morcos | Alex Morcos |
sdaftuar | Suhas Daftuar |
CodeShark | Eric Lombrozo |
roasbeef | Olaoluwa Osuntokun |
jtimon | Jorge Timón |
ryanofsky | Russell Yanofsky |
Disclaimer
This summary was compiled without input from any of the participants in the discussion, so any errors are the fault of the summary author and not the discussion participants.