These are notes typed during the Zurich gathering. They are almost entirely the words of others from the event, and usually not those of the typist. There is little or no attribution for who said what, and different speakers are not indicated, so it all runs together as a single stream. A list of participants can be found at http://coredev.tech/nextmeeting.html. The typist is responsible for practically all errors and mistakes. Please keep in mind that there were 3 full days with multiple parallel conversations going on, so not everything could be transcribed.
Why did we organize this event? The idea came up when we were at Scaling Bitcoin in Montreal. There are always nice talks, but not much time to really work on stuff and code. So slowly the idea came up that we should have time to sit together, work on stuff, and discuss proposals or strategies in person. The idea is to have 3 days to work on whatever you want or whatever we want. But I don't think we should make any consensus decisions here; it would not be appropriate. We can make proposals but no decisions. I recommend no decisions.
Yes, I am aware that most developers prefer to work in their own environments. But there is clear evidence that working together in person has value for team structure, productivity, and stability. Sometimes we are under lots of pressure from various parties. To survive that challenge, or benefit from it, we need a stable team and clear communications. My goal, or our goal, could be to improve Core during the next few days in a few direct and indirect ways. By indirect I mean: someone could say after these few days, "what did we do during the three days? it wasn't really productive." Sometimes that happens when you sit together; you talk a lot but code productivity is not visible. But we should not underestimate the long-term effect of meetings on team stability, on knowing the other side, and on clearer communications.
I would say it's good if we make significant progress as a group, like merging segwit today. That would be awesome. But don't feel too pressured to make code-based progress. That we're able to sit together in one room and work as a team is the value in itself. If we make code progress, or merge pull requests, even better.
There's some fixed schedule, like the lunches, dinners, and breakfasts. The rest is up to us. Every contributor in open source has their own roadmap and goals, so it's totally up to us what we want to do. We could work on segwit, maybe get it done, maybe merge it; no pressure. It would also be productive to discuss things: someone could bring up a topic, and there are some topics floating around, like the p2p refactor, or the idea I had about p2p encryption. Someone could write down a proposed time slot for something to discuss; on the wall there's a list of ... and then a room and code. Then maybe someone shows up and registers for that particular talk. Or if this is too much, we can just sit around and code. But maybe the scheduling leads to a bit more productivity.
Write down what you want to talk about, maybe talk with others to arrange it, and then let's see who signs up after that. As an example, I would like to work on p2p encryption; maybe I can pull in Pieter for that, though I'm not sure, since everyone wants to talk with Pieter. All 20 of us should not work on the same thing together; we should try to keep groups to about 5 people at a time. We could bring up topics to work on, form groups, then work in those groups.
Jonas Nick was asked to fuzz-test the new sighash function in segwit. So far he hasn't found anything.
It was taking 4 seconds to sign. When signing a transaction, we start with a regular transaction and then convert it between representations many times, and each conversion just hashes it...
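The slowdown above is the quadratic-hashing problem: with the legacy sighash, each input's signature hash covers roughly the whole transaction, so total hashing grows with the square of the input count, while the segwit (BIP143) sighash caches the shared parts. A back-of-the-envelope sketch; the 150-bytes-per-input figure is an illustrative assumption, not a measured number:

```python
def hashed_bytes_legacy(n_inputs, bytes_per_input=150):
    """Legacy sighash: every input re-hashes roughly the whole tx."""
    tx_size = n_inputs * bytes_per_input
    return n_inputs * tx_size  # O(n^2)

def hashed_bytes_bip143(n_inputs, bytes_per_input=150):
    """BIP143 sighash: hashPrevouts/hashSequence/hashOutputs are
    computed once and cached, so per-input work is constant-size."""
    return n_inputs * bytes_per_input  # O(n)
```

Doubling the input count roughly quadruples the legacy hashing work but only doubles the BIP143 work.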
The travis-ci bit for caching has been flipped for everyone globally. Push bitcoin master to your own master, because what happens is that when travis builds a branch, it pulls the cache from master and tries to use that. If you don't have your own master cache, branches have nothing to start from; if a branch already has its own cache, it pulls from that instead. "master" is special; ask travis-ci, I guess. Ideally, it would pull from the upstream master branch; I'm not sure how niche a use-case that is. You could argue that we might not want to share our cache with downstream, or the other way around: maybe the downstream doesn't want to use our cache. This would have to be configurable, and I don't know if they want to add that. The fact that they special-case "master" is a little odd, since github doesn't even do that.
Issue 5669 - the hostname problem... travis-ci support might be dragging their feet now that we've switched off of the paid plan.
Maybe a machine is passing the whitelist, and sometimes it's not passing the whitelist. What has to be whitelisted? The OSX SDK is not redistributable. So it's a best-effort "don't let anyone randomly grab it from our site", but our travis-ci machines will presumably pass. If they change the names of their machines or something... They could add a token to their interface. They had that.
We still have a broken-constructor MSVC workaround. We have that in some of the stream stuff. We have some conditionals for many versions of MSVC. There was something in the .. vector operator or string.. vector equality or some vector overload. The MSVC default copy constructor for the standard allocator.
Maybe we could try to get Apple to make that SDK available. We might not use anything from that file. We basically need the stubs and headers, it's the Oracle v. Google argument. We could get involved in that. Maybe we could go back to Judge Alsup and figure out if this qualifies as "fair use".
verify-commits.sh -- it's in the dotfiles repo for the Qubes stuff. It's rather critical. Qubes OS: a master dotfiles repo that gets copied around to various VMs. Yes, the key needs to be in your local keyring in order for the verify-commits script to pass signed commits. We could commit a temporary pgp keyring. Using your own keyring lets you verify the keys separately.
We should only have nSequenceId for tips. It's changing an invariant. We could do that as a separate refactor change, and then this becomes easy. Because it may have different, uh.... (trailing off)
You only want the sequence for tips? That's not exactly right. No, it is. Anything but the nSequenceId of the tip doesn't matter. What we try to converge to is, among the valid chains with the most work, the one with the lowest nSequenceId... invalidateblock and every... you normally validate a block when you connect it, so. So your idea would be that when a block ceases to be the tip, you get rid of the sequence? There are different options. Either we say the sequence value is irrelevant for anything that is not a tip, which means you change the check in CheckBlockIndex to only enforce the rule for tips... yeah, that would be one way. It also means that when you switch to a new tip, you can reset the counter to zero, so you don't get any overflows unless you have 2 billion competing tips, which is probably a sign of another problem. And what else? An alternative is a separate map from block index to sequence number, only maintained for the tips. Yeah, alright. It may actually be... we could replace setBlockIndexCandidates with a map where the value is the nSequenceId, because I think it's exactly those entries that we need it for. I thought... we insert things, then we prune, and then the only thing that matters is the sequence. But that's a bigger change. The separate map is probably memory-wise the smallest, but who cares about 4 bytes per block in the index? That's 1.6 megabytes for mapBlockIndex. We could get rid of it and save 1.6 megabytes there, alright.
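The invariant under discussion can be sketched concretely: nSequenceId only matters for tie-breaking among candidate tips, so in principle it could live in a small per-tip map rather than in every block index entry. A minimal sketch of the tie-break rule, with made-up (work, sequence) tuples standing in for block index entries:

```python
def best_tip(candidates):
    """Pick the best chain tip: highest total work wins; among
    equal-work tips, the lowest sequence number (i.e. the block
    received first) wins."""
    # candidates: iterable of (chain_work, sequence_id) tuples
    return max(candidates, key=lambda c: (c[0], -c[1]))
```

For example, best_tip([(10, 5), (10, 2), (9, 1)]) picks (10, 2): most work, and earliest-seen among the equal-work pair.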
There's a git config for github for pulling all the pull requests. There's the ?w=1 trick: append it to any diff URL on github for a whitespace-ignoring diff. They even have different diff view options; there should be a checkbox there for whitespace. Wladimir suggested this to email@example.com but it's still not implemented in their UI. It used to be much more restricted; you couldn't even extend the range at which you were looking. So maybe they will add this eventually, since they have updated the diff functionality at least once before based on requests.
What about doing the checkblocks before the rewind? Not at level 4, but for the other levels. So how about level 4 is skipped when you know you are in that state? Yes, it's ugly. You have to pass into checkblocks that you're in an inconsistent state. You end up in a state where your tip doesn't have any recent blocks at all. So we're violating the constraint that, however much we've pruned, one day's worth of blocks always remains. Well, two days. I don't know the motivation behind checkblocks. For pruning nodes, we should never go past that data point; if we're willing to do that, the problem is substantially simpler. I don't have much evidence of it actually catching that many errors. It caught the version corruption. That still exists. Right now, transaction versions are a signed value, and they get serialized into the UTXO set using the varint encoder that is part of the coins database stuff. If you send in a negative transaction version number, the value stored in the UTXO database is wrong. We don't use it for anything; it was put into the database because maybe it would be used for consensus rules in the future. And maybe a UTXO version number... there's a spending one, but not a creating one. The version of the transaction creating the output is never relevant, and it seems unlikely that we would ever use that in a consensus rule.
You wouldn't want to use it as a consensus rule. It's wrong, it's storing incorrect data, and the checks actually found that. We solved the problem by bypassing that test. What was the argument for not wanting to use it? Why would the semantics of how a UTXO behaves when you spend it depend on the version of the transaction that created it? It's a layer violation: a global flag on a transaction changing the semantics of its outputs when spent later. It makes sense to put a version number in a particular output (like segwit does), but not on the creating transaction, because per-output versions let you mix things. Segwit puts a version in the output, which makes sense.
There wasn't a reason to do this when it was done; it was just "why not". Taking it out would save like 5 megabytes of UTXO set size. It's not that it hurts to have it; it hurts right now only in the sense that it's wrong. Could we fix it? The reason it is not fixed is that if we fix it, we need to fix everyone's database, and in fact there might be no way to do that other than forcing a full reindex. There are a small number of transactions in the blockchain with negative nVersion numbers. You can't read the stored version number and fix it in place. If we know about all of them in a fixup script, sure. The data is lost.
If you serialize and read back, what do you get? No clue. Not the right thing. Otherwise we could say that for everything in the future we're going to collapse them. It depends on the implementation specifics of the varint encoder or something.
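The round-trip failure can be illustrated with a Python re-implementation of the coins database's variable-length integer format (modeled on Core's WriteVarInt/ReadVarInt in serialize.h, which assume a non-negative input; this is a sketch, not the exact C++ code):

```python
def write_varint(n):
    """Core-style varint: base-128, MSB-first, with an off-by-one on
    each continuation step. Assumes n >= 0; negative input misbehaves."""
    out = []
    while True:
        out.append((n & 0x7F) | (0x80 if out else 0x00))
        if n <= 0x7F:       # a negative n passes this check immediately
            break
        n = (n >> 7) - 1
    return bytes(reversed(out))

def read_varint(data):
    """Inverse of write_varint for well-formed encodings."""
    n = 0
    for b in data:
        n = (n << 7) | (b & 0x7F)
        if b & 0x80:
            n += 1
        else:
            return n
```

write_varint(300) round-trips fine, but write_varint(-1) emits the single byte 0x7f, which reads back as 127: the sign information is simply gone, which is why a fixup script would need an external list of the affected transactions.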
So in checkblocks, what about a change to just not run checkblocks if you're pruning - stop complaining about missing data at that point? Sure. It's independent of segwit, and it makes the change in segwit easier. We can back off the intensity of checkblocks in a number of different ways. If you combine rewind with pruning, you will end up with no data. If you update soon after activation, you basically have a two-day window; if you wait a couple weeks, you're hosed, and the alternative is that you download the blockchain anyway. It's not a security reduction compared to the known pruned case where - not sure if people are aware of this - when you upgrade a node to add segwit support, a node that was previously running non-segwit code after segwit activated, and accepted a segwit-invalid chain during that time, won't know that it's invalid. That's the historic behavior with soft-forks in the past as well. But you were talking about rewinds? It's always been the case. The difference is the extra data. It violates the constraint that we think we have blocks once we have added them to the database. You might serve blocks that are invalid to your peers. And secondarily, you want people to be validating, which is a separate part. You need to obtain the extra data. You bring up a node, it's pruned, it's past the horizon, you can't reorg back to the point where segwit started, so you need to get rid of the blocks you have so that you don't serve them up; rewind will do that. There are weird edge cases here that we never got to before. This will be interesting stress testing for the pruning code. It's cute that we just have to get rid of the blocks. The UTXO set is right, and that's what's important. That's ugly. That you can end up with no blocks in pruning mode? Yeah. It's consistent with non-pruning behavior in terms of security; it's just fragile. If you remembered, if we just added a flag that said what ...
what script verify flags a block was checked with in the block index, then when you start up, the block index loading code would see it was checked with these other flags, and it could just reorg back to that point and apply forward. Segwit does this, but for having data, not for verify flags. If you are pruning, it is no longer possible to do that, because you can't go back further than your prune horizon. If not for .., we could download blocks and go back further.
Is there any reason that you would have to rescan the wallet? I haven't thought about how the wallet interacts with segwit. There's another issue here with find-fork and the global index. You may end up with a wallet where the CWalletTransaction does not contain the actual signatures. In our current wallet, there might not be anything that uses them. The wallet locator could be ahead of the tip. It shouldn't be a problem, because it should be a descendant of your tip, but that's not how it works: because of how locators work, find-fork-in-global-index will walk back before your tip and try to scan from a point where you don't have data. I fixed this with a simple change where you check in both directions: if chainActive contains the locator entry you are looking up, return it, or the other way around. Well, that sounds wrong. What if you are actually on a fork - a fork after the last thing that is in your locator, but before the branching point with your chain tip? Your code will say it's good, but it's not. If the locator descends from your tip, the lo.... you go from the latest.... you walk the locator from latest to earliest, so when you encounter something that... Is there a better way to fix this? We should apply the locator to pindexBestHeader and find the branch point there. This is always a better estimator than doing it with chainActive. chainActive is always an ancestor of pindexBestHeader. Actually, I'm not sure that's true.
You find the first entry in the locator that is in your block index. Forget best. And this will always be... the code does this: it looks it up in mapBlockIndex, and if you know about it and chainActive contains that entry, return it. I added the reverse condition as well: return the tip if that chain's... eh, I think I need to see it. There's a pull request.
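For reference, a block locator samples heights densely near the tip and exponentially sparsely further back; the fork-finding code walks it from latest to earliest and returns the first entry it knows about. A sketch of the height selection, matching the standard CBlockLocator construction:

```python
def locator_heights(tip_height):
    """Heights included in a block locator: the last ~10 blocks,
    then exponentially increasing steps back, always ending at
    the genesis height 0."""
    heights, step, h = [], 1, tip_height
    while h > 0:
        heights.append(h)
        if len(heights) >= 10:
            step *= 2
        h -= step
    heights.append(0)
    return heights
```

Because the steps double, a locator for any tip is only O(log n) entries, which is why applying it against the wrong chain view (chainActive vs. the best-header chain) can land you far from the true branch point.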
I did want to ask about segwit wallet behavior. The segwit patch includes wallet support, right? Yes. Say you have a wallet running on a node that doesn't have segwit, and another copy of your wallet runs on a segwit-running node. You upgrade the old wallet because you noticed the transactions were invisible; if that node was pruned, it's possible to hit that. So in that situation you would miss transactions. You deserve both pieces at that point. You're running two copies of the same wallet. I could see it with a backup. But yes, it requires you to mix a segwit-running node with a non-segwit-running node. One option would be a big warning label: if you're running a pruning node, segwit is different and you might want to upgrade before activation; if you didn't, then you might want to reindex. We have -reindex-chainstate, by the way. It wouldn't help here because you have to download the blocks anyway.
We did get reindexing to be much faster now, so that's good. And synchronous. Yeah, there's no helping that corner case; the only answer is docs and reindexing.
Whenever you're using a wallet and you upgrade post-soft-fork, and then you do listtransactions or getrawtransaction, the result will not contain the witness for anything that pays to you, even if it paid addresses that were in your wallet beforehand. Is this a problem? It's not usually data you need to... how would you have non-segwit... there would be a non-segwit address. Where do we expose the witness? In getrawtransaction. It's not something you should be looking at, but I'm sure some people are, to try to figure out who's paying them. Oh well, screw 'em.
If you run the same wallet on two hosts at the same time, bad things happen in general. If you're generating addresses on another machine, you have to import them. You set your keypool to some really large value - initially a million addresses - then you move your wallet around; it works almost perfectly when you do that, but it's not a supported config. This will become more of a problem when we merge the bip32 stuff, because then it will become almost completely correct to run wallets in multiple places. But it still won't work right, because you might still try to spend the same coins at the same time. If you generated the keys in both places - yeah, but what if both wallets spend? In segwit, sure, that will work fine, but if you do this upgrade thing: same key but different address, it won't be scanning for them. It's no different from any other wallet duplication. Whenever you create a new address in a new place, you import it into another - here you'd try to import it into a non-segwit wallet. You could even generate the wallet on the segwit host with segwit addresses and copy it to a non-segwit host... the wallet will probably be imported just fine. The wallet stores where it last scanned to. When you upgrade, once we do bip9, it will try to fix it, but it might not be able to go back far enough, because of pruning - it won't go back before the pruning horizon.
We should change the functions so that they won't detect bare pubkeys as paying to the same address as... bare pubkeys are used; they should be separate. petertodd used one in one of his example scripts. We shouldn't be scanning for bare pubkeys for every key in the wallet, only for the ones imported as bare pubkeys. There should be a way to import a key such that it is watched as a bare pubkey. We do basically support bare pubkeys in the wallet; we just don't generate them. It's ugly. Whenever you have a key, we will treat anything that pays via P2PKH or P2PK to that key as "mine". None of it has any flags for how it's limited, so... [....] We didn't repeat that mistake with P2SH.
We might want to soft-disable the wallet support for segwit in the initial release. We need to have the wallet support to test it. Right now it's still manual. We should potentially hide that, because it risks users associating this behavior with segwit. I had to write out a procedure for miners testing segnet; there needs to be a switch. A switch could change the wallet behavior so that getnewaddress now returns a segwit address, which is what we should deploy long-term anyway. But we don't want RPC behavior changing based on network conditions. There are mining pools that cannot pay to P2SH-segwit addresses. We should default to generating segwit addresses once segwit is deployed and running pretty well. We should figure out what the native formats are going to be. But separate from the native formats, regardless of how those go, we should be handing out segwit addresses - P2SH-segwit addresses - once it's well deployed. We can accomplish this with a config flag that says "use segwit addresses", defaulting to 0 except on segnet or segnet+testnet where it defaults to 1, and then we change the default to 1 in the next release. Or we can have getnewaddress take flags. Maybe return a pubkey directly... that's going to make integration harder. So say in 0.13.1, if segwit is well-deployed, we generate segwit addresses as the default type; if you don't like that, use a config option like usesegwitaddresses=0 or something. Tie it to wallet version? Yeah, it could be tied to that... We have to handle change as well. Change should always be segwit addresses - we can do "change to segwit address" aggressively. Or: look at the different types of outputs you have, randomly pick one of them, and mimic that type for the change. P2SH isn't enough, it has to be P2SH-segwit... P2SH is still better than, it's slightly better than... It will get better with Schnorr.
nickler is testing segnet chainparams with 0.9 to make sure segwit is still soft-fork compatible.
In terms of address type stuff: as Pieter was saying, he wants to talk about address types. Rusty wrote a blog post a while ago on a native segwit address type, which consists of a prefix, a colon, and then a base32 encoding of the scriptpubkey directly. There are some issues with integrating that into a wallet, because you would only want to use it for recognized cases. Say we have an address type that encodes any possible scriptpubkey up to a certain size limit. Now you do a listtransactions and suddenly all your old transactions also match this new type - and then they would be listed like that - and that's not what you want. Either this is the default for everything we don't have an alternative for, or you only do it for a whitelisted subset. But that's independent of the encoding; that's more of a UI issue. You check the particular case. And then the question is, what do you do for the checksum? After trying many different possibilities and grinding several weeks, I guess, of CPU time on this, we found a code that adds 6 characters and, for any random error, has at most a 1 in a billion chance of failing to detect it. It will detect up to 4 transposition or substitution errors with 100% certainty. It doesn't fix anything though - well, it can fix 2 transpositions. Stop. No.
In the BIP we would tell people loudly not to fix errors. But the capacity to fix errors means we could make a UI that points to where the errors are. You would type in an address, and it would tell you which spots to look at. If you just tell users that it's wrong, they will go figure it out on their own. The strongest use for the ability to correct errors is that it is also a strong guarantee of detecting them; using an error correction code as a design criterion gives you the properties you want. Whether a UI should make use of it is an orthogonal question. And yes, there are many suggestions for what could be done here, including things like detecting an error, knowing where it is, but telling the user to look at two other spots as well so they don't just try things randomly. You definitely don't want a UI that does something like "are you sure this 3 doesn't have to be a 5?" - and then they put in a 5 there. That would be horrible. The important property is that we guarantee detection of the errors users are likely to make: it guarantees detecting up to 4 substitutions, 4 transpositions, combinations of those, etc. Outside of those cases, there's the roughly 1 in a billion failure chance. With base58check right now it's about 1 in 4 billion, but with no guarantees for any particular class of errors: the common errors humans make have the same failure chance as any other, unfortunately. The checksum code which sipa has been working on is like 4 lines; just encoding and checking is like 5 lines of code; the error detection and recovery code is like 500 lines. What are the length properties? It's longer, but not much. For pay-to-witness-pubkey-hash, we have a scriptpubkey of 22 bytes; 22*8/5 is 36 characters, plus the checksum, so it's of course 42 characters. Most of the length change comes from encoding 256-bit data: P2WSH would be 55 characters. base58 was about 5.8 bits per character... the length is probably the largest downside.
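The length arithmetic above, spelled out: a 22-byte P2WPKH scriptPubKey needs 36 base32 data characters, and with the 6 checksum characters that gives the 42 mentioned; a 34-byte P2WSH scriptPubKey needs 55 data characters.

```python
import math

def base32_chars(script_len_bytes):
    """Characters needed to encode a scriptPubKey in 5-bit groups."""
    return math.ceil(script_len_bytes * 8 / 5)

CHECKSUM_CHARS = 6

# P2WPKH: OP_0 <20-byte pubkey hash> -> 22-byte scriptPubKey
# P2WSH:  OP_0 <32-byte script hash> -> 34-byte scriptPubKey
```

So base32_chars(22) + CHECKSUM_CHARS is 42, before counting the human-readable prefix and separator.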
When used in a spoken context, base32 is about 4x faster to read out. It's also about half the size when put into a QR code: QR codes are not efficient for base58. QR codes have digit, alphanumeric, byte, and kanji modes; base58 has to be stored as bytes, but base32 can be stored in alphanumeric mode. That mode is upper-case-only alphanumeric, so you would have to use uppercase base32 in the QR code, but you want to display a lowercase address as text. I don't want my key yelling at me, to be fair. But maybe that would prevent some people from copy-pasting it. Maybe we should encode this as emoji.
One of the advantages of having error-protected addresses of any kind is that it's not just about human transcription error but also memory error in computers, because bitcoin transactions are irreversible. Prudent wallet software should create a transaction, sign it, and then check the result; if the address you put in is error-protected in any way, that's a strong test that funds are going to the right location. Unfortunately that doesn't protect the amounts, because they are not under the address error protection. But the amount could be embedded in the address and placed under the same error protection, and then the full thing would be checkable by wallets. You could also add a signature on top of that, but that's overkill at this point. When buying anything online, you always have to copy-paste two things; you might drop a digit, or you might swap the fee with the value. In the scheme Rusty described, you have a prefix and then an address, so you could have btc:A:amount or something; the address would be slightly longer with the amount included, and the check would apply to the amount as well. That might be something we want to move forward with too.
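The btc:address:amount idea can be sketched as follows. The sha256-truncation checksum here is only a stand-in for the BCH-style code discussed above, and the field layout is a made-up illustration, not a proposed format:

```python
import hashlib

def payment_request(prefix, address_data, amount_sat):
    """Cover both the address and the amount with one checksum."""
    payload = f"{prefix}:{address_data}:{amount_sat}"
    check = hashlib.sha256(payload.encode()).hexdigest()[:6]  # placeholder
    return f"{payload}:{check}"

def verify_request(request):
    """True only if neither address nor amount was altered."""
    payload, check = request.rsplit(":", 1)
    return hashlib.sha256(payload.encode()).hexdigest()[:6] == check
```

A wallet that drops a digit from the amount, or swaps fee and value, now fails verification; a bare address format cannot catch either mistake.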
I like the prefix idea a lot. It doesn't make sense to have a version byte. What would the transition from base58 address types to this other type look like? It will take years. We still have uncompressed keys. P2SH took so long - and P2SH is seen as multisig, not as a general case, by most people. Multisig is seen as somewhat sexy; it just took a long time to get parties to update. Did we track the adoption of P2SH?
You feed it an array of base32, it computes the checksum, ... ... It could be part of the spec. Even though it encodes an arbitrary scriptpubkey, it has a maximum size. Nobody is going to implement what the spec says, though. There's an implicit limit in the protocol, and it's 10,000 bytes. There would be a pay-to-witness-scripthash address and a pay-to-witness-pubkeyhash format, but the format actually just encodes a scriptpubkey; it's only standard for those two cases, not for arbitrary scriptpubkeys, which I think would be dangerous. Other people can later extend it - it becomes easier to extend because all you have to do is remove a limitation. The risk is that people will implement it as "oh, it's a scriptpubkey". But if you limit it that much... one of the reasons to have something encoding an arbitrary scriptpubkey is that if the network deploys, say, script version 5 with just MAST or something, it will take forever to upgrade the entire industry. We don't want to create that problem. We're not going to want to do this again: if we have a new thing again, like lightning, changing an address type is the most difficult thing. Address types being extensible: lightning... confidential transactions is a good example - no, I don't think that's a good example either; a good example is MAST. You would have to extend for blinding; CT is not a good example because the payer needs additional processing infrastructure - I can't create a new address type now that is generic enough for CT. But for MAST I can encode the raw scriptpubkey. Right. It's unrealistic to say we'll never need new address types in the future. We could include a lua interpreter or something. It's a new address: when you want someone to pay you, you give them a virtual agent who acts on your behalf to negotiate the protocol for you; it can barter and stuff.... <humor>petertodd's on it. Whenever you want someone to pay you, you just send petertodd. P2PT - pay-to-petertodd.</humor>
We can't hope to cover all possible future cases, but we can cover a lot, like a lot of new segwit output types; we can't cover stealth addresses or CT. We should cover the things that can be covered and not design further. Being able to put the amounts under it would be super useful - with a different prefix; it would have to be optional, yeah, with a different prefix. You can only store 288 bits of data, so you'd have to change to a different code or something.....
XOR the genesis blockhash into the checksum for the new address type - I think jtimon came up with that idea. It's likely that the prefix will at some point collide with some other software system that uses the same prefix. It's a prefix plus base32 data, so it's recognizable, and the prefix can be something like BTC or whatever. But with short two-to-three character strings, if this gets adopted by many software systems - maybe not just bitcoin but maybe tor or something - and they like the same scheme, it's not unreasonable that there will be a collision on the prefix. To make sure there's no confusion, you can XOR a 32-bit context ID into the checksum without breaking its properties. For bitcoin, you take the lower 30 bits of the genesis blockhash (the higher bits are all zero), so an address that refers to another chain can never be valid on yours. That was for compatibility, sure. The P2SH problem with litecoin - don't know if you've seen this - ... sending bitcoin to a litecoin address.
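The chain-context trick can be sketched like this; the 30-bit width follows the discussion above, but treat the exact construction as illustrative, not a spec (in particular, this ignores the byte-order details of how block hashes are displayed):

```python
def chain_context(genesis_hash_hex):
    """Derive a per-chain context ID from the lower 30 bits of the
    genesis block hash, as discussed above."""
    return int(genesis_hash_hex, 16) & ((1 << 30) - 1)

def finalize_checksum(raw_checksum, context):
    """XOR a per-chain constant into the checksum. Because the same
    constant is applied to every address, the code's error-detection
    properties are unchanged, but an address checked against the
    wrong chain's context always fails."""
    return raw_checksum ^ context
```

XOR is an involution, so verification just applies the same context again: a checksum finalized for one chain only validates when checked with that chain's constant.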
Using the same pubkeys in bitcoin and litecoin: some people have sent to coinbase.com keys... and Coinbase had to write a blog post about this saying no, we're not going to return your litecoins, because our HSM and yadda yadda.
Litecoin never deployed P2SH themselves; they just rebased on top of bitcoin. At least one altcoin took the bitcoin genesis block, but it was attacked immediately because someone inserted the bitcoin blockchain. They had changed the p2p port, but then someone force-connected to it: "and your altcoin is now bitcoin". Consensus worked, right? There are many altcoins that have copied litecoin's genesis block. Feathercoin was one of them. Were they too lazy to change checkpoints? No, this was before checkpoints. They used checkpoints to prevent reorging back to litecoin, which is effectively the same thing as choosing a different genesis block.
The first upgrade after segwit could include Schnorr signatures, and also the new address type. But this will not be included in the initial deployment of segwit. If it's a long time... they are orthogonal. Announcing them at the same time doesn't mean you can only use one after the other... the addresses are not a consensus thing, but they're harder to upgrade. If there's nothing out there for a long time, it might be like bip38, where people just make up their own standards. We need to talk about this. Rusty's blog post was a sign that he felt this was a problem, and I'm sure he won't be the only person. Tadge says he uses base58 and it starts with a "k". It's so much easier to have something that fits with everything else. If I'm doing that, other developers will say, I want something that I can .. to those addresses. Initially there was bip143, anyway.... I would say faster is probably in some ways safer, in that if there's sort of a void for a while then people will...
The bip should include encoding and decoding code in the 5 most popular languages, so that people can copy/paste. This will be much easier than base58. A lot of people struggled with implementing base58check stuff, especially because they would go and look at it.
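To illustrate why base32 is so much easier to embed as reference code than base58check: there is no bignum arithmetic, just bit regrouping. This is the sort of minimal, dependency-free pair a bip could carry verbatim; the character set here is the standard lowercase RFC 4648 alphabet as a placeholder, not a proposed address charset.

```python
CHARSET = "abcdefghijklmnopqrstuvwxyz234567"  # illustrative alphabet only

def b32encode(data):
    bits = "".join(f"{b:08b}" for b in data)
    bits += "0" * (-len(bits) % 5)             # pad to a multiple of 5 bits
    return "".join(CHARSET[int(bits[i:i + 5], 2)] for i in range(0, len(bits), 5))

def b32decode(text):
    bits = "".join(f"{CHARSET.index(c):05b}" for c in text)
    usable = len(bits) - len(bits) % 8         # drop the padding bits
    return bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))

assert b32decode(b32encode(b"hello")) == b"hello"
```

Compare this with base58check, which needs arbitrary-precision division and a double-SHA256 checksum; that is where implementers kept tripping up.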
It was scope creep in segwit to figure out the serialization structuring. In segwit also, it has implications for fraud proofs but that's a semi-discredited idea anyway. Do you have a proof that fraud proofs are discredited? I wrote an article on that, but it turned into a 20 page blog post that I haven't finished yet. And one of the big cases where reifying the transaction hash would be most valuable is not needed for segwit in any case. Didn't we fix the input value thing? It signs input values? That would be the big gain, using it to get input values and keep input value proofs small, but you don't need those for segwit in some cases. It's the 99% solution.
What about merkle proofs for the witnesses? Yeah we should do those. What's the timing on that? What's a good point to do that? I don't think it's hard. You need the witnesses in your wallet. I don't think it's hard. On bip37, I was asking if people need to change the code where we scan, right now we scan scriptsigs for stuff, do we need to scan witnesses for stuff? I think the general conclusion on this, unrelated to segwit, is that the inclusion of signatures in bip37 was a massive mistake. Are people using it? As far as I can tell, nobody has been using that in bloom when I looked a few months ago. It causes massive false positive rates in the filters. A call should be made, we should ask if other people are using that. I wasn't able to find anyone using that. bip37 bloom filters scan the signatures and they scan the pushes in the signatures as well. We should make sure that they don't. I don't know whether, in segwit right now, the bip37 code, I don't know what they do with witnesses. They don't scan them. It just looks at scriptsigs. So should we send an email to the dev list and ask if anyone expects this to work? I expect that if we get a reply, it will be people cheering that the behavior is being improved by not scanning the witnesses. Maybe some wallet authors might be doing the wrong thing right now? I don't think there will be any response, because I don't think most people have realized that this bip37 signature scanning is screwing them.
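A sketch of the bip37 relevancy matching under discussion: the filter is tested against data pushes in output scripts, against outpoints, and (today) against pushes in scriptsigs; the proposed clarification is that witness items are never scanned. The transaction shapes and the set-backed "bloom filter" below are simplified stand-ins, not the real implementation.

```python
from collections import namedtuple

TxOut = namedtuple("TxOut", "script_pushes")
TxIn = namedtuple("TxIn", "outpoint scriptsig_pushes witness")
Tx = namedtuple("Tx", "inputs outputs")

class BloomStub:
    def __init__(self, items):
        self.items = set(items)   # a real bip37 filter is probabilistic; a set is not
    def contains(self, item):
        return item in self.items

def is_relevant(tx, bloom):
    for out in tx.outputs:
        if any(bloom.contains(p) for p in out.script_pushes):
            return True
    for inp in tx.inputs:
        if bloom.contains(inp.outpoint):
            return True
        if any(bloom.contains(p) for p in inp.scriptsig_pushes):
            return True   # the signature-push matching that taints filters today
        # inp.witness is deliberately never scanned
    return False

bloom = BloomStub([b"pubkey-hash"])
tx = Tx(inputs=[TxIn(b"prev:0", [], [b"pubkey-hash"])],   # data only in witness
        outputs=[TxOut([b"other-hash"])])
assert not is_relevant(tx, bloom)                          # witness ignored
tx2 = Tx(inputs=[TxIn(b"prev:1", [b"pubkey-hash"], [])],
         outputs=[TxOut([b"other-hash"])])
assert is_relevant(tx2, bloom)                             # scriptsig still matches
```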
This should be an update to bip144 then. It should be an intentional change to say that bip37 should not look at witnesses. Do you want OP_RETURN or do you want OP_DROP? When I made the 4 meg blocks on segnet, it was just OP_DROP and stuff. We can use OP_RETURN but it costs 4x as much, so just do OP_DROP before our pubkeyhash and then they put their junk data in the witness part. Is that better? is that worse? I would prefer the junk data to be in the witness. We have a policy that prevents more than 80 bytes in an output, and there's no policy for preventing anything in witness data right now. That policy is not very effective anyway. Given the outcry about the size about OP_RETURN, I would say the policy is effective until people figure out how to bypass it. As far as why it's preferable to have it in the witness, the only real argument I can give is that if there are lite clients in the future that don't need the signatures on the transactions, which I think is likely, and using them to decide who signed in multisig well that's kind of a special use case. It means that, there's much less data that a lite client will need to download if there's any junk data in the block in witnesses instead. Even with a false positive in there, it would skip over the junk data. If you want filtering of stuff in the witness, that sounds like an extra filtering thing in general. Lite clients generally can't validate signatures, they don't have the UTXO set, they can validate signatures but not the UTXO. They don't have the pubkey. And they don't need to, their security model doesn't rely on that. That's a massive... For example, there was this post to the mailing list a few weeks back with a proposal that sort of inverts how bloom filtering works, where blocks have commitments to a map. And when you get a hit, you just go and download a whole block. And having junk data in the witness is good in that case because you wouldn't download the witness for the whole block. 
In the case where you want the junk data, you can go get it from the witness data. The payment code stuff is poorly designed right now, but maybe it could use something to search for your payment code in an output. If that data is moved into the witness, if you do that, you lose the privacy advantages. Just for the locator thing? Yeah. The people who are using this weird OP_RETURN stuff, if you move to OP_DROP or to the witness, how do we make it easier for them? Many or any of them don't seem to be using bip37 filtering to find their stuff. It's been described, maybe one of them is, generally they aren't. Usually they don't even use the things they put into the blockchain, it's write-only. Even if there was something that people were scribbling into a witness that we want bip37 to trigger on, well maybe there should be a flag to tell bip37 to scan for that, and turn that on only when you want it, because it's a special case.
bitcoinj wallets currently rely on pushing the outpoints which also happens automatically. Which also contributes a ton to the tainting of the filters. ... Schnorr multisig that lets you use the size 1, is not often an improvement for all applications, because you often want to know who signed. So it sounds like you need a way to get the witnesses for a block. I would like to check for commitments, but I would still like to check signatures. I don't know if the one on the blockchain has the same signature. It would be easy to give you the commitment path for it as well. That code is pretty generic. Witness txid contains the normal txid means you only need to check one tree.
We should discuss policy for just for witness. What's the current behavior? There is none. We need to have that in order to prevent people from doing the parasitic thing. There should be a... the same rules as normal applies. You can't have a transaction whose virtual size is now above a certain limit. So, alright. Say I'm a node relaying to the network, a segwit transaction arrives at my doorstep, I take its signature, I push BLAH BLAH BLAH DROP. That won’t work, because segwit script validation implicitly has cleanstack. BLAH BLAH BLAH DROP. No you can't do that, it's only pushes in the script.
In the docs, with checkmultisig, you don't put 0 on the first witness item, you just put a new length. You know how with.. you have an empty witness item, not a witness item with a zero. It's not clear in the docs. That's what OP_0 does, it pushes the empty string, the name "0" is misleading. The docs should be clarified. It would make sense to update the docs to say that 0 is encoded as an empty byte array. It pushes an empty byte array. It's possible to serialize an explicit 0, but we don't do that anywhere. But it violates minimal encoding or whatever. If you take the script and directly put it in, ... are there any other ways you can take witness and modify it to be longer but still be valid? I think not. But I'm very uncertain. This is the fixing malleability subset about it that we were never quite confident we had answered all those questions.
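To make the doc point concrete: in the witness serialization each stack item is a length prefix followed by that many bytes, so the CHECKMULTISIG dummy element "0" is an empty byte array, serialized as a single zero length byte, not as a one-byte 0x00 push. The signatures and script below are placeholders, and the serializer assumes small item counts and lengths for brevity.

```python
def serialize_witness_stack(items):
    out = bytes([len(items)])                  # item count (compact size, small case)
    for item in items:
        out += bytes([len(item)]) + item       # length-prefixed item
    return out

sig1 = b"\x30" * 71                            # placeholder DER signature
sig2 = b"\x30" * 70                            # placeholder DER signature
witness_script = b"\x52" * 5                   # placeholder multisig script
stack = [b"", sig1, sig2, witness_script]      # dummy element is EMPTY, not b"\x00"
ser = serialize_witness_stack(stack)
assert ser[1] == 0                             # the dummy is a zero-length item
assert ser[2] == 71                            # next byte is the first signature's length
```

An explicit one-byte 0x00 item would be a different value on the stack and would violate minimal encoding.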
If your script starts with OP_DROP, then, you're ... right? If you don't do that intentionally, OP_DROP with an empty stack is a NOP? No, it's a failure. It's a stack exception or underflow. If your script starts with an OP_DROP, you send a transaction with an empty length thing, and then someone sticks a megabyte of garbage there. It wont relay because the fee is too low. You can't... if you had many drops, they could put that megabyte of junk. Currently there is a consensus limit that the stack items for execution are limited to 520 bytes which is what they could be before... this is not a restriction on the length of the witness script. This only applies in the currently defined versions, good. But we may want to put a policy that is smaller because there's no use for anything larger than 72 bytes currently. It might make sense to put an 80 byte policy limit. There's a script that publishes arbitrarily padded whitespace text into the blockchain, that's why it's whitespace padded. In spite of showing all this wonderful stuff, they just whine about OP_RETURN not being big enough, they have never done any research or whatever.
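A sketch of the policy idea from this discussion: consensus caps executed stack items at 520 bytes, but since nothing currently needs items above ~72 bytes (a DER signature), standardness could reject anything over 80. The 80-byte figure and the function shape are the suggestion being floated here, not a merged rule.

```python
MAX_STACK_ITEM_CONSENSUS = 520   # existing consensus limit on executed stack items
MAX_STACK_ITEM_POLICY = 80       # proposed policy limit, mirroring the OP_RETURN cap

def witness_items_standard(witness_stack):
    # Check every item except the final one (the witness script itself,
    # which would be covered by its own script-size limits).
    return all(len(item) <= MAX_STACK_ITEM_POLICY for item in witness_stack[:-1])

assert witness_items_standard([b"\x30" * 72, b"\x51"])       # ordinary signature: fine
assert not witness_items_standard([b"\x00" * 200, b"\x51"])  # junk-data item: nonstandard
```

Such a limit would close off the "many DROPs, a megabyte of junk" padding trick at the relay-policy layer without touching consensus.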
One reason to keep using OP_RETURN is because some websites use OP_RETURN to txid mapping. They are using websockets to do that for their web API. These people are probably going to keep doing that.
We need to cache the wtxid but we might not need a lookup. In a post-segwit plan, we just communicate the short ID key. Looking up based on short ID, it picks transactions from a discount. We iterate the mempool. We benchmarked this. 50 CPU cycles. 5000 go to iterating the mempool per entry. For currently small mempools, it's a few milliseconds, which is fine.
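A sketch of the plan described: no persistent wtxid-to-short-ID lookup table; instead, iterate the mempool once and compute each entry's short ID on the fly. Truncated salted SHA256 stands in here for the keyed siphash a real scheme would use, and the 6-byte width is illustrative.

```python
import hashlib

def short_id(wtxid, salt):
    # stand-in for a keyed siphash; truncated to a compact identifier
    return hashlib.sha256(salt + wtxid).digest()[:6]

def index_mempool(mempool_wtxids, salt):
    # One linear pass over the mempool. The discussion put this at roughly
    # 5000 cycles per entry (mostly iteration, ~50 for the hash), i.e. a few
    # milliseconds for current mempool sizes.
    return {short_id(w, salt): w for w in mempool_wtxids}

salt = b"per-message-nonce"
wtxids = [bytes([i]) * 32 for i in range(1, 4)]
idx = index_mempool(wtxids, salt)
assert idx[short_id(wtxids[0], salt)] == wtxids[0]
```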
There have been some PDF 0-day exploits targeting Blockstream people and Bitcoin Core people. Patrick also got some. It was from mega.nz. Qubes uses a new VM for every app, including networking, disks, and reading PDFs. Actual VM, using xen. Xserver in its own VM. It does some window manager magic so that it looks like a single system, windows are color coded by which VM they are running in. It's Redhat on top of Xen. You can make all your VMs be debian. They don't have loops reading untrusted data sources in Qubes.
Someone could hide a 0-day in the UTXO set to go and exploit the virus scanners that are monitoring the blockchain. There was a recent one where the security researcher sent the exploit demo to the antivirus company; they didn't get the report because it crashed their mail server, because it auto-scans the attachment using the same software.
The last commit that I added included some commentary https://github.com/sipa/bitcoin/commit/5b0cd48d9f29f4a839640819230d4e88e077ce40. This was sipa going over everything except the tests. And Tadge did a reimplementation from the bips. He did need to talk on IRC a bit to get it right. And as a result there have been some changes made to the bips. Jonas Nick was doing some coverage analysis and automatic fuzzing of the code as well. Are we still missing seed nodes? We should upgrade some of the seed nodes to only report... or.. petertodd runs the testnet ones. How do you want to approach those tasks?
The least destructive thing is to just add new ones. Add a segwit testnet seed node. Add new ones. Assign to every seed node, add a service flag and the current ones just get no network then you... I already do this for RBF seed nodes. I can take that code and change the numbering. Yeah. Go ahead. Just the refactor of adding service bits to the seed nodes, that's something that we should merge first.
I have a code style question. What do we do for comment blocks? I'm looking at sipa's commentary thing. Adjacent chunks are alternating between slash slash comments and slash star blocks. It depends on whether you want doxygen to pull these up. This is not intended to be merged. It's inconsistent, I noticed now. It's steganography and encoding a help message. If you notice, some are spaces and some are tabs. You alternate it quite nicely.
We should not explain results, but we should explain differences in the comments. Talking about the refactor is not helpful. Convert the text to talking about the differences. Slash slash comments are not valid in C89, so I use those in my own source code so that the text won't get committed. This was fixed in the last year. I think they fixed the variable length arrays.
It's done and ready for review. There's a refactor which needs a test to verify that it's identical. There was ancestor tracking, the CMB refactor, then CPFP uses both. So the pre-reqs are all merged? The refactor isn't merged. If you don't care about the refactor, you can just review the whole thing. It's child pays for parent, not children pay for parent- we don't support that, it's too hard. It's only the first born. ANYONECANPAY but they all can't. Together with Chinese policy, that's perfect. One child per parent (paying). How does it determine which one to pay? You look at each child and decide if it makes the package sort to the top. Once you evaluate the ancestor fee rate, it's updated to include something that is potentially a parent. Whenever we include a transaction, we walk its descendants and put them in a special holding area.
What's the best possible fee when you do an RBF fee bump? 1.5x the prior fee. Fee estimates might be higher, you need to make sure the new transaction pays for its bandwidth. What do we do if the fees are equal or even lower? Then why are you bumping it? Maybe you want to add an output. There are still cases you want to bump it. A multiplicative increase guarantees you can cover any span of fees, anything smaller and you can't do that. Bram's fee estimation stuff? If we were to do more extensive bumping in the Bitcoin Core wallet, we would probably want to do the pre-sign thing where all the bumps are pre-computed in advance so you don't have to come back. Greenaddress does this. It's pre-signed. I thought it was. Pre-signed means you don't have to go back and get access to the private key again. You can also be slick and locktime the updates. There's no technical reason needed to do that. You can send them off to different people to broadcast.
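The claim that a multiplicative increase covers any span of fees can be sketched numerically: each replacement beats the prior fee plus its own relay cost, and geometric growth reaches any target in logarithmically many bumps with bounded overshoot. The numbers below are illustrative, not wallet policy.

```python
def bumps_to_reach(start_fee, target_fee, factor=1.5):
    # Return the sequence of fees a wallet would pre-commit to when bumping
    # multiplicatively until the target fee rate is covered.
    fees, fee = [start_fee], start_fee
    while fee < target_fee:
        fee *= factor
        fees.append(fee)
    return fees

path = bumps_to_reach(1000, 50000)   # satoshis, illustrative
assert path[-1] >= 50000             # the target is always reachable
assert len(path) <= 12               # even a 50x span needs only ~10 bumps
```

This is also why the pre-sign approach is practical: the whole geometric ladder can be signed up front, optionally with locktimes staggering when each replacement becomes broadcastable.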
What is the user experience if someone bumps their fee at the same time that a block gets mined with his original transaction? For the end-user, how does that work? From their point of view, they click and then they notice that the fee is less than they expected. They should not see both transactions. They should just see one transaction. If you bumped a fee, you hide it immediately, you have the risk of the original getting mined. But it should be shown in the UI as one transaction, that's hard I think. It's all txid based at the moment. It's output based.
The correct thing is easy to imagine for the GUI, which is that there should be one entry ever shown on the screen for some transaction. If you go into more advanced details you might see more detail. Implementing that is a little tricky. In listtransactions, it's output oriented, it's not that different. What about a version in RPC to bump or increase the fee, then we try to calculate the optimized fee, not just 2x the fee? On RPC, on GUI it could be a context menu for increase fee or bump fee. No outputs added.. Well you need new outputs for change sometimes. You should probably have a discussion with Lawrence who implemented this at greenaddress. One challenge is that, say you have bumped a transaction and you had change from that transaction, and you don't know which is going to get confirmed, either one might. You want to make another transaction that would spend the change of the first transaction. And so, what do you do? If you bump, you invalidate it and then you don't know which one. So you create descendant transactions for each one? So adding an extra output is sane. An alternative is that you never do that, you add an output. Well you still have to deal with the case where the original transaction... even if you add the output, what happens if the prior transaction got confirmed? Now you have to still, you still have to make a descendant. You do both.
With wallets, you probably want the wallet to include the user intent recording. This is way different. With bumping, what descendants I was going to pay, then bump add it. It's an invasive rewrite. If you bump a transaction fee, ... you also have to make... in that case you already computed the one that didn't get bumped. You actually have to find all the children of the transaction you are bumping, and then merge them too. And then if the person you spend it to is respending the change, well if they are re-spending unconfirmed transactions from a third-party... well that edge case I'm not worried about. Lawrence implemented a first cut of this, and he realized it's more complex than he thought. I think the minimum set of things, with just bump and not adding more outputs, ... as I said, Lawrence implemented it and then realized his implementation was not right and then went and did more work. It's not easy. In some sense, it's easier to do with the precompute kind of approach.
Fee estimator seems to be mostly kind of working. It's a little freaky though at times. I haven't been seeing people using the fee estimator also complain that their transactions are stuck and taking forever. I haven't seen that. I see a lot of complaints on stackexchange about transactions not confirming, although I don't know what wallets. Usually that's places with static fees. Someone was complaining about dynamic fees but it had not initialized or something, not enough blocks or something. Reasons to bump the fee, if you target to the next 10 blocks, or target to the next 2 blocks, it's useful yeah. The state of the network changes. The fee estimator can't see the future. The more full blocks you see, the more important fee estimators will become. But I saw some users complaining about the fee estimate, there's no estimate fee one.
Delay when you commit to the MMR so that you have some time to do it. And maybe not with every block. That's another one you can do. I nearly wrote up the version with periodicity instead of always. It's reasonable to calculate it every time. It makes it simpler. It could be a lot faster if you calculate it in batches, potentially.
There's an optimization that you can do in these schemes where you have a portion of data in the MMR tree where nodes aren't keeping everything, but they can keep the top levels of the tree, and the proofs are shorter because you don't have to include those parts in the proofs. This adds 10 MB of storage to the nodes and keeps the proof size down. These optimizations are useful because otherwise you end up with really big proofs like 4 megabytes. You could change how the pruning rules work. The data that you go prune is all consensus critical. If you make this hard assumption that you should provide this data in your block... well you can make the pruning rules more liberal.
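A back-of-envelope calculation for the optimization described above: if nodes keep the top k levels of the commitment tree, a membership proof only needs the sibling hashes below the cached frontier. All figures here are illustrative.

```python
import math

HASH_BYTES = 32

def proof_bytes(n_leaves, cached_top_levels):
    # A membership proof needs one sibling hash per level below the cached top.
    depth = math.ceil(math.log2(n_leaves))
    return max(depth - cached_top_levels, 0) * HASH_BYTES

full = proof_bytes(2**30, 0)        # ~10^9 entries, nothing cached
trimmed = proof_bytes(2**30, 18)    # cache the top 18 levels: 2^18 hashes,
                                    # roughly 8 MB of extra node storage
assert full == 30 * HASH_BYTES      # 960-byte proof per entry
assert trimmed == 12 * HASH_BYTES   # 384-byte proof per entry
```

The storage/proof trade-off scales gently: each additional cached level doubles the stored frontier but shaves another 32 bytes off every proof.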
Like segwit, we're probably not going to have a good handle on what these schemes feel like without implementing it. In segwit, we did an initial implementation in Elements Alpha, and that experience gave us the confidence to go and do the real thing. petertodd has implemented the code that does this type of stuff; when you get it right, it gets nice. There's lots of ways to screw it up. The implications are easy to get wrong. For segwit, when we first talked about doing this separation of signatures and the rest of the transaction data, everyone's initial impression was that this would be a whole rewrite of the codebase that would be terribly complex. When sipa thought about implementing it, the original version, it turned out that you wrap the serializers and it's done pretty much. And that's perhaps something that should be done here, this should be implemented perhaps not in bitcoin but in some test thing just to get an idea. Fortunately some people want this stuff to get implemented, the data structures are already done. This would be necessary to move these ideas forward.
There's subtle issues, especially with pruning. How do you write code that interacts in these insane ways? When you say you can drop something, well what data is where? How do you write rules that capture this stuff? It's not that trivial. There's a lot of languages that can't deal with these data structures reasonably. There's a lot of boilerplate to bring stuff into and out of memory.
How is this different from jl2012's december proposal? petertodd has a few extra details, but it's mostly the same kind of thing. It sounds like... to my understanding of both, which is superficial, is that they are the same proposals. petertodd was talking about delaying the commitments, so it's a small detail. The general pattern is the same for both. I wanted to have an example to show how this is possible. If you can figure out how to make it faster, sure do it without the delayed commitments.
This idea is not a super new one. jl2012 posted about it in December. There's been various forms talked about for a number of years. The realizations like, oh you can split and do both. And how the UTXO sets, and have things fall out of it. That wasn't in the initial ideas, but it makes it a more attractive thought. Or the notion that nodes can keep the top of the trees. I think it's a really cool idea and I hope we can have something like that in bitcoin. I'm not sure how to overcome this automatic crazy response that people have had to it, which is that, every time it's come up in a place where the members of the public, someone responds thinking this is stealing their money. Probably because there have been UTXO pruning proposals, without the proof stuff.
There's some nice sort of properties about this kind of system, like a transaction that needs proofs associated with it, is invalid without the proofs. You could say, you happen to have all the data as an archive node. I could connect to you, ask for an address, I could write a transaction that pays you and also whatever I wanted to pay, and then give it to you without the proofs. Then the archival node operator can add proofs and get it mined. So it has built-in the ability to pay to provide for proofs. There's lots of questions to answer like how do you structure wallets for this? We have analyzed the designs to know that it's straightforward to build a wallet that tracks its own proofs. Unless you had coins stored in a vault completely offline and you weren't tracking proofs; so you might need an archival node to provide it for you. It would be a long time until bitcoin is scaled up to the point where people wouldn't be able to do that voluntarily as an archival node.
If the design is right, it should be possible to make an archival node specialize in a certain subset of the data. UTXO insertion ordered index, do it by time, that's the easiest way. Then you can find archival nodes that take a certain window or something. Right now today, if you want to spend a coin buried in a safe deposit box or whatever, you need database access to the network to write the spending transaction in order to find the txids. Bitcoin nodes do not provide an address index lookup. You have to scan the blockchain which is really expensive. Or you consult a server, like an electrum server, which will tell you the input. The only extra component is needing the proofs, since we already have that electrum component.
Not sure how to deal with the public response.. if it's implemented in a test environment. Ignore the naysayers, say it works. "Recoverable" is bad because it implies it could be lost. People like "archive". It's good, it's safe. And what about paying an archival node? What if it becomes too costly to pay for that proof? Well, transaction fees are going to be more than zero, geeze sorry.
Witness data, as well. But it still, yes it could be made in a way that doesn't cost any additional amount. But with efficient block propagation, you could probably do so without too much moral hazard. You don't want extra data in a transaction not costed that is free-form. But if it's highly structured proof data, then maybe it doesn't really change the "size" of the transaction. So the additional cost then might be just the rescanning cost. The scalability benefits are so obvious in this scheme. Right now, we're at a state where the UTXO size is still manageable but we're going over leveldb performance cliff and it's becoming a lot slower to do lookups. It's not just size of memory, it's also about performance of leveldb. We can maintain it. We can make the DB cache more efficient. Plenty of things to be improved. But this data structure is going to continue to grow. And it's performance-critical. It sets the bar for being able to run a full-node. When we introduced pruning, you could run it with 2 GB of space on your system. But now the system is closer to 4 GB right now. Prune 550 with all the undo data in the blocks? Yeah. The UTXO set size itself serialized... my chainstate directory size is .... I had a pruned node running before I came here but I deleted it to free up some space. Okay, right now it's about 2 GB. Maybe I was thinking 4 GB because of a db log... if you run with a huge DB cache and shutdown, it does not merge the log into the database. So that might also be why you saw a larger size. It's not an urgent issue now, but it could become urgent pretty quickly.
You create a proof of coin history by linearizing and dropping some of the data probabilistically based on coin value. It's not feasible to implement without allowing people to create coins out of thin air. On average, you are creating money out of thin air. But there's no way to get that into bitcoin. It's never going to happen.
We currently have no way to detect local corruption of the UTXO set. There are also other problems with leveldb corruption. We need ways to detect leveldb corruption. The CRC may be helping in a way that is invisible right now maybe. If the last log entries are unreadable, we load it in a mode where it ignores them. So it will not give an error, your last writes... it will undo your last writes. This happens transparently, or at least it should be, I don't know if we tested whether it is transparent. Old files might get corrupted, like a hard disk overwrites part of your file. So it can detect file-system level corruption. The initial test, it's quite long, especially on machines with slow storage. The hardest part is reading. It's warming up your OS disk cache. One of the unfortunate things about it is that it doesn't prime the coins cache. It used to, until you optimized it away.... which was intentional because it would blow up... it limits how far back it goes, it will only go so far back with the cache, to avoid increasing peak memory usage. It had two levels of caches. If you did that, you threw it away and kept your cache primed, yeah that would be a big improvement. At the end of the chain state things, still push it... I think the problem is that it gets updated, you rewind using the.. so you cannot use key state, but if you can checkblocks for, you reconnect them, you rewind them and play them back. If you left your cache in the state it's in, that should be the state it is in. In checkblocksfor, you have rewound and then reconnected. But it could be changed to 4 as the default. It could be decreased in depth. I think C4 over 3 isn't that much, because it's just script validation and not the disk latency anymore. And script validation has become much faster. And undo is very slow in the code base and a number of other areas. I wouldn't be surprised if they were implicated by checkblocks.
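A sketch of what an application-level integrity check on the chainstate could look like: store a truncated hash next to each serialized UTXO record and verify it on read. This illustrates the idea only; it is not the leveldb-internal CRC discussed above, and the record format is made up.

```python
import hashlib

def seal(record):
    # append a 4-byte truncated SHA256 tag to a serialized UTXO record
    return record + hashlib.sha256(record).digest()[:4]

def unseal(blob):
    record, tag = blob[:-4], blob[-4:]
    if hashlib.sha256(record).digest()[:4] != tag:
        raise ValueError("chainstate record corrupted")
    return record

blob = seal(b"txid:0:5000000000:scriptpubkey")   # illustrative record layout
assert unseal(blob) == b"txid:0:5000000000:scriptpubkey"

flipped = bytes([blob[0] ^ 0xFF]) + blob[1:]      # simulate bit-rot on disk
try:
    unseal(flipped)
    corrupted_detected = False
except ValueError:
    corrupted_detected = True
assert corrupted_detected
```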
I tend to skip the initial check these days, this is bad right because you're bypassing the suffering of the user now. If you're disabling it, that's a sign that something is going wrong. Does anyone here have a dbcache higher than the default? Well, we should increase the default. Or make it automatic.
There's an advise flag in Linux, that you can lose the contents of the page at any time. It's hard to write a program to repair that situation. You have to copy the data every time you use it. Linux has something like that where you can allocate a page of memory like that. There could be a log entry, "this would be way faster if you are willing to use system resources". When people complain about slowness, oh just turn this knob.
We mostly shouldn't have these configuration options, because users aren't going to know how to set these values anyway. But there are corner cases where we don't know how much memory the user wants the system to use. When the user wants to change the setting, they still don't know how anyway. And miners don't want to change these values with flags, even if they want the better performance. Why didn't we make it run that way? And we wrote the software. If you are using bitcoin-qt and a UI, then there should be a drop-down sure. Whereas if you're running it on a terminal, people can probably deal with a config file. QT idea is adding profiles. Yep when bitcoin-qt boots up, tests your GPUs and then enables sha256 or something to start mining. And then it starts ordering ASICs or something.
It feels like people are not happy with the GUI anymore because it's slow and crashy sometimes. It's slow when it's syncing up. I hear that a lot. How many people regularly build and use the wallet here? I think it would be more stable if we had the node and the GUI in separate processes communicating over RPC. Awesome, great. Welcome to 4 years ago. Nobody did that. There was a wallet over RPC at one point. This isn't a cute thing, right? Just having the wallet separate from the rest of the process would have an important security benefit. GUI separate is not the same as wallet separate. There can be a node GUI, then you can connect to your node server. Then we have a wallet GUI. Nobody cares about the node GUI. Statistics would matter to some people. Someone did a curses monitor for bitcoind. When it builds, it works fine. But the build is often broken. One of the configs for travis is building the QT-level wallet. Nobody knows this at the moment. You lose the tabs with the transactions. First version of pruning did not support the wallet. It was a blank pane, and then a debug window. I don't want to connect to the Core code, untangle it, then go over RPC. Separating wallet from node is important, but not GUI from logic. p2p interaction is then no longer in the same place as the thing that holds your private keys. It's embarrassing that this is not process separated. Working towards wallet separation requires authentication for private RPC commands...... eh, probably not. Because without having those things, we could still have the process separated on the same host. And a different user on your same system, connects to you, that's not a problem at all. I certainly, I consider the p2p encryption stuff to be a good thing that we should do. It's not needed for separation.
There's a lot of stuff that the current wallet does correct. It makes rewriting from scratch more infeasible. It would take more than one person's mania to carry it through. This is a standard failure mode in history of software development, where lots of companies have tried the let's rewrite. There's probably a minimal way, that deattaches it by brute force, then add a few more RPC messages to get the functionality missing back. Tried to get Armory run on top of RPC. I offered to do work to add commands. Don't write your own node backend, just use RPC. I would be happy to promote Armory as the wallet for Bitcoin Core and he didn't want to do that. He favored a design that inherently was impossible to make scalable.
What was the problem with RPC? If you want to sync your wallet, using RPC. If you're talking about detaching, detaching the wallet from the node- you don't want to do that over RPC. Relaying all transactions and blocks? You still need an RPC layer for fee estimations and things like that, which should be optional because they are impossible for SPV. And SPV doesn't work. It's not about SPV with random nodes. We should not call it SPV. We should call it "trusted nodes" or "trusted local lite" or "detached wallets".
Something actually short-term doable. I recently merged the pull request. The only thing which I had changed is that ... the IETF standard is different from the OpenSSH standard. They have their own encryption scheme.. it's different. The difference is that they encrypt the length with a separate key, and that's pretty cool, the openssh one is designed better. We can use code from the openssh implementation. The benefits of encryption, we have a one-time chance to overhaul the message protocol and drop the SHA256 and it will make it faster. Finally, we can drop the worrying about adding fixed-length commands in the message protocol. No need for 10 bytes of spaces in a tx. Wladimir pointed out that some people are adding in random info. Some nodes were leaking memory into that. There's probably a private key leak. libbitcoin has had bugs like that, but they fixed this. How do you end up leaking into that? When you send a 2-byte tx command, some implementations fill in the response with random data from other locations. This might be libbitcoin because they did this in other locations. They were doing it in transaction version numbers. That's how we ended up with negative transaction version numbers in the blockchain. libbitcoin was generating transactions with uncleared memory in their transaction data. That could be fixed as well. Those little fixes would actually end up recovering a non-trivial amount of space lost to the MAC for authentication and encryption. It would recover some of the lost bandwidth.
One side effect of this encryption stuff is that it makes the peer protocol faster, though it will also use somewhat more bandwidth. Why faster? Because of the sha256: every message in the p2p protocol gets hashed with sha256, and that's pretty slow; calling poly1305 plus chacha20 is about 4x faster. Someone on the mailing list pointed out we have a 4-byte length ... which could end up demanding a 4 GB buffer. Because you want to check the MAC afterwards, you need the key. The MAC is over the whole packet. The length comes first, so you know the length ahead of time. You need to be able to verify the MAC. The first 3-4 bytes you decrypt are the length. It's up to the implementation to care about the buffer. In theory the length field is 4 bytes; well, let's make it 3 bytes.
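As context for what the encryption would let us drop: every legacy p2p message carries a 4-byte checksum taken from a double-SHA256 of the payload. A minimal sketch of that legacy checksum, i.e. the per-message hashing cost an AEAD construction like ChaCha20-Poly1305 would replace:

```python
import hashlib

def legacy_checksum(payload: bytes) -> bytes:
    """First 4 bytes of SHA256(SHA256(payload)), computed for every
    message in the legacy p2p protocol; this is the per-message hashing
    the transcript says an AEAD construction would remove."""
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
```

For example, the well-known double-SHA256 of an empty payload begins with the bytes `5df6e0e2`.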
We should make an explicit effort to reach out to other implementers to review this proposal. In our past experience, they are more likely to read our code than our bips. But I think they might actually read this bip. So we should reach out to them. This is worth doing even without authentication: it increases privacy, and it makes the p2p protocol faster. I would also like to have a good handle on what authentication we hope to implement before we consider the encryption design done. ... The application is more exposed to export restrictions, unfortunately. Well, what about wallet private key encryption? That's only internal to the application, and the U.S. export forms only care about communication.
The concrete effect is that if I want to ship a product from the U.S. or Australia or Canada that copies U.S. law, to other places, since I'm shipping a device that contains cryptographic components, I have to fill out some forms with the U.S. government and they ask questions about what kind of cryptographic components it contains. In theory it could delay or mess up, in practice it's not a problem. So the protocol would have to be optional, and have a build time option to disable it. Then you could ship things out. Does it matter if it's NIST or...? Does it make sense to pick one either way? It's not enough of a distinction, unfortunately. They don't care if it's about authentication and authorization, which is what wallet encryption is. I had no problems filling out those forms with respect to that analysis. Yeah, it's authorization to spend from this wallet.
This gives... we already have wrappers around those. They are currently using boost; boost thread, for example. Standard threads we would be able to drop in, except they aren't interoperable with boost thread and condition variable. This is done; there's a pull request, 7993. Is there interest in moving forward with this, knowing that it would cause some code churn? Or should we leave it until later? We should postpone this until 0.14. We should focus on the network stuff. We should use this when we are trying to get rid of boost, but that's not a goal for 0.13. There might be some places where boost is easier to get rid of, so once this is in, it's easier to do a sed to replace things. Except for filesystem and options; and we can't replace boost chrono until we've merged this.
Right now, if a new transaction arrives, it gets immediately included in a block template. It would be useful, as we build a block, to look at arrival time. We should assign a fee modifier to counter propagation issues, to account for propagation. This is also the rational behavior for miners: miners would penalize a transaction they just received if it had a somewhat lower fee. I went and did some propagation studies where I got some computers around the internet with synchronized clocks and logged the accept-to-mempool times for all the nodes. This is a binary thing, unless, with a compact block, you were relaying... Well, what I implemented was a fee floor based on the time delay: if you've had a transaction for one second, that implies the fee per byte must be greater than x, or else you would skip it in the block construction. This is justified by saying that if you're going to include this transaction, then you are going to incur an additional 200 ms propagation delay per hop. Once you have done that, you're willing to include anything; I have not implemented this part. This could be an extra thing to do, to change state after including these things. If someone sends out a 300 BTC transaction fee, you include that even if it causes a propagation delay.
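A hedged sketch of that fee-floor idea. The decay shape and all the constants here are made up for illustration; the transcript only says that a transaction held for one second implies the fee per byte must exceed some threshold, with the penalty justified by the ~200 ms per-hop propagation cost:

```python
def inclusion_fee_floor(age_seconds: float,
                        base_floor: float = 1.0,
                        propagation_penalty: float = 50.0,
                        half_life: float = 2.0) -> float:
    """Hypothetical minimum feerate (units arbitrary) for a transaction
    to enter the block template, given how long ago it arrived. Freshly
    received transactions must clear a higher floor, compensating for the
    extra per-hop propagation delay including them would cause; the
    penalty decays as the transaction ages."""
    return base_floor + propagation_penalty * 0.5 ** (age_seconds / half_life)

def include_in_template(feerate: float, age_seconds: float) -> bool:
    """Skip a transaction during block construction if it is too new
    for its feerate."""
    return feerate >= inclusion_fee_floor(age_seconds)
```

A very high-fee transaction clears the floor even at age zero, matching the 300 BTC fee example: some transactions are worth the propagation delay.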
The numbers that I got when measuring propagation were interesting. It's an S-curve shape: a few get around really fast, but most of the convergence happens around 2 or 3 seconds between two nodes. If I have synchronized clocks on two nodes and measure the time from the first node seeing a transaction to the second node seeing it, most of them get done in 2-3 seconds right now on the network. Even if you go out to 10 or 20 seconds, it only gets you to 90% of transactions.
Due to the descendant count limit, which might be due to the package limit, if I have A and B which are descendants of some common thing, the order in which I get them might determine whether they will be accepted. On the subject of propagation in general: right now in the network, orphan transaction handling is really busted. Because of trickle it happens often, and that will go away with 0.13... in master right now, there's a thing that sorts the transactions before relaying them. This does not completely eliminate orphans, because there's the scenario where one peer sends you transaction A and another peer sends transaction A and then A prime which is a descendant; you would go get A from the one peer, and before it's back to you, you might get A prime first, and then it's an orphan. There's a leak in the orphan pool. There's a size-limited orphan pool, transactions get added, and it will randomly drop transactions once it reaches a certain size. The pool is always full, even on master, even in a network where all the nodes run master. The reason for this is that if we accept a block off the network, we don't remove transactions that were included or double spent from the orphan pool. We don't do any orphan pool handling at all when we connect a block. This should be fixed and also backported to 0.12 as well. This is really adversely impacting the relay of transactions on the network.
If you have a chain of multiple transactions, you will often kill the whole thing. You never make any productive use of the orphan pool because it's dropping too much too fast. The combination of the changes in master, plus clearing out the orphan pool when a new block comes in, means the orphan pool should be mostly empty when new blocks come in, and the random eviction code won't really matter anymore. When we receive a transaction that brings the parent of an orphan into view, the chunk of code to go and handle that is sort of a gnarly mess that would have to be duplicated in the block handling or something. I see what you're saying. Yeah. There are two things that we need to fix there. When we connect a block, we should keep a list of all the previous outputs that it spent. Then we should take that list of previous outputs and use it to filter the orphan pool. What about moving the orphan pool into the mempool? Beyond conflict removal: which orphans have now become mempoolable? They are not conflicted, and their parents are now in the block. We don't do either of these things. Sometimes, if it's deep, and the block conflicted with the parent, you have no idea. There's still going to be a leak. We should expire things that have gone into the mempool. The random eviction is moronic; well, less moronic, I guess. We do another moronic thing... we track which peer gave us the orphan, and if a peer disconnects, we throw out all the orphans from that node; it's sort of dumb. This limit has shrunk over time. There are several reasons for the limits on the orphan pool. It can be expensive to process orphans, because if an orphan spends lots of inputs... well, I can make a transaction that's not real, it just looks like it; its entire size is spending outputs that don't exist. It will make the map of prior outputs really large. It's a map, so it's not slow, but it's going to take time. It should probably be limited in size, not number of outputs.
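The first fix described above, filtering the orphan pool on block connect by the block's included txids and spent prevouts, could be sketched roughly like this. The data shapes are hypothetical, not Bitcoin Core's actual structures:

```python
def process_block_for_orphans(orphans, block_txs):
    """orphans: {txid: set of prevouts it spends};
    block_txs: list of (txid, set of prevouts it spends) from the
    connected block. Returns (surviving_orphans, removed_txids)."""
    confirmed = {txid for txid, _ in block_txs}
    spent = set()
    for _, ins in block_txs:
        spent |= ins  # the list of previous outputs the block spent
    surviving, removed = {}, []
    for txid, ins in orphans.items():
        # Drop orphans included in the block, or conflicting with it
        # (spending a prevout the block has already spent).
        if txid in confirmed or ins & spent:
            removed.append(txid)
        else:
            surviving[txid] = ins
    return surviving, removed
```

The second fix, promoting surviving orphans whose parents just confirmed into the mempool, would be a separate pass over `surviving` and is not shown here.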
So it sounds like we're going to have a leaking orphan pool, and it should have a time limit. From testing code now in master for relay, at least in the lab environment, my finding is that a transaction basically never needs to stay in the orphan pool for more than a couple of round-trip times to its peers, so like an hour or 10 minutes. You don't want to wipe it on every block; what if it is in the middle of propagation? I did think it might be reasonable to get rid of this "kill all entries after a peer disconnects" and use a time limit. We also have a time-limited map data structure; we could use that in relay or here, and that would probably be strictly better. It's complicated here because there are indexing data structures: removing from a map and then also removing from the previous-out index. That's easy to do, out of all the possible changes. I spent some time thinking about whether we could do better than random eviction. The limited map seems better than random eviction. There are some attacks there, maybe? The problem is that it costs nothing to make orphan transactions; you just make up gibberish. You can drive someone's orphan pool to be empty, you can keep flooding it. You can't get banned for it, because the peer will never know that it's all gibberish. You could have an orphan map per peer so that they aren't competing with each other for space. Should the orphan pool be generalized, or should we have a separate rejects pool? There are two reasons for having a rejects pool. We should still keep the rejection filter, the bloom filter, which is space efficient. But it's actually easier to keep around rejected transactions. The compact block relay sometimes has transactions that it could have received but we threw them out; the compact block relay could rescue them from that when they show up in blocks, when a miner had a different policy than you. The other reason it is good is because of child pays for parent.
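A sketch of replacing the random eviction with the time limit being proposed. The 10-minute bound is one of the ballpark figures mentioned, and the flat `{txid: timestamp}` shape glosses over the real indexing structures the transcript warns about:

```python
def expire_orphans(orphans, now, max_age_seconds=600.0):
    """orphans: {txid: received_timestamp}. Evict anything older than a
    couple of round-trip times (roughly 10 minutes here), kept safely
    under the relay pool's maximum. Returns the evicted txids."""
    expired = [txid for txid, received in orphans.items()
               if now - received > max_age_seconds]
    for txid in expired:
        del orphans[txid]  # a real pool must also fix up its prevout index
    return expired
```

Unlike wiping on every block, this never drops a transaction still in mid-propagation, and unlike per-peer wiping it survives disconnects.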
There's a third case, which is locktime. You might not need it for child pays for parent; you should use replace-by-fee to solve the propagation problem. The threshold for propagation is very low. As a miner or something like that, you would increase your fee income even with a small rejects pool because of child pays for parent. It would also increase your fee income if there was a user tool for stuck transactions. Having a rejects pool and using CPFP... it's like the least efficient way to fix the transaction... yeah, just pray to the transaction reject pool fairy. Blockchain.info produces transactions where the user sets the fee too low; it will produce transactions that don't even relay, and then it will also spend their outputs. So the user will go to the wallet, they will set a fee that doesn't relay, they will spend it, then they will be saying "man, my transaction doesn't work", then they set their fee too high, then they transact again and it spends the output from the stuck transaction. Then they whine that their transaction has taken 4 hours and it hasn't gone through......
Compact block relay can go take transactions from the orphan and rejects pools. But there's a problem with the reference counting there. C++11. Sad that we can't get rid of the orphan pool. The orphan pool is needed; it has an effect. It really hurts propagation to not have it. Or we have to re-architect propagation. Transaction packages? I think basically we don't need them; with the latest changes to relay, with a small infant pool, it might just work. You have a transaction with inputs and outputs, you can add to it. You just do replacements. There you go, you have a package.
Wladimir was working on assembly optimizations for sha2. Well, there's one way to make the hashing faster than the assembly- which is, don't do it. Pieter has instrumented the hasher to print out the hashes, to see when the hash has been computed multiple times. And there's sometimes hashing that has occurred multiple times, like 20x. He did this 3 years ago and fixed a bunch of places. This is how we got CMutableTransaction vs CTransaction. Transactions that you reject, or receive multiple times, you will address those. No, this is 6 times in a row. Back-to-back. Oh, that's messed up. We need to fix this. Need to wait for a block to see this again.
Another weird behavior with transaction relay: we get an awful lot of Not Founds. If you look at any node running and look at the stats per message, you will have sent and received a whole bunch of Not Found messages. Sent, sure. But received? You will almost certainly have sent a lot more than you have received, but you will also have received some. At least in the cases that I instrumented, yeah, there's some garbage that requests weird stuff. But ignoring that, I think someone is trying to track transaction propagation to see if you have them. Bitcoin originally protected against this attack because the only way to get a transaction was if it was in the relay pool, but now getdata also checks the mempool. If we move the insertion into the relay pool to the point where we actually relay (which it's not, it's ahead of it)... the fallback to the mempool bypasses that. Now, it should be easy to make it so that you don't put it in the relay map until then, because we already decided to not relay things not in the mempool. That's a one-line change. But when someone does a getdata, we still check the mempool too, so that one-line change alone would not improve the privacy of the system. I can't get rid of that because of the mempool p2p message... so I thought about just throwing all the mempool p2p results into the relay map, but no, that's ugly and it would run you out of memory. Now you would see transactions and not even the serialized... So, in any case, the cause seems to be that nodes are receiving transactions and for some reason getting jammed on receiving them. They end up pulling them on their nth retry; the 2-minute spacing has a maximum of 10 minutes out, and the real relay pool maximum is 15 minutes. In the cases where it's not just junk, what's happening is that the Not Found occurs at like 15 minutes and 1 second. A couple of ways to improve this.
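The one-line change under discussion, serving getdata only from the relay map rather than falling back to the mempool, might look like this in outline (all names and shapes are illustrative, not Core's actual code):

```python
NOTFOUND = None

def handle_getdata(txid, relay_map, mempool, mempool_fallback=True):
    """Answer a getdata for a transaction. With mempool_fallback=False,
    only transactions we actually announced (those in the relay map) are
    served, so a spy can no longer probe mempool contents via getdata."""
    if txid in relay_map:
        return relay_map[txid]
    if mempool_fallback and txid in mempool:
        return mempool[txid]  # the privacy leak: reveals unannounced txs
    return NOTFOUND  # the peer gets a notfound message
```

As the transcript notes, flipping the fallback off only helps once the mempool p2p message itself is also restricted, otherwise the same information leaks through the other door.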
The whole behavior of the 2-minute offset for fetching things: we should rework the request management there (Wladimir has some code for this). We should make sure that its maximum is slightly smaller than the relay pool's maximum, which could be a max of 20 minutes or something. I'm not sure why nodes are taking 15 minutes to fetch transactions; that suggests to me they are getting DoS attacked, being inved and not replying. It takes a lot of failed retries across many peers for a response to an inv to take 15 minutes; that's a long time. It's not a tremendous thundering rate, but I expected to see none, especially among my own nodes, and I definitely get them between my own nodes. That was something I observed while working on other relay improvements.
Trying to address these privacy leaks related to the p2p network is like, there's so many of them. It's quite difficult. The mempool p2p message makes it very hard to do anything useful here. Pieter improved it slightly by putting the mempool p2p behind the same poisson time delay. It should have a longer one. That helps. My earlier attempts to improve it broke things. So that's good. Because nodes have the ability to just get data to check, they can use that for tracing. And I think some people are using that for tracking. And why should SPV nodes care about the mempool? Since when do they do that? That's pretty broken.
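The poisson time delay mentioned above batches inv announcements at exponentially distributed intervals, so announcement timing stops revealing exactly when a transaction entered the mempool. A sketch; the interval value is illustrative, and Core's actual constants differ by peer direction:

```python
import random

def next_announcement_time(now: float, avg_interval: float,
                           rng: random.Random) -> float:
    """Pick the next time to flush queued invs to a peer. The delay is
    exponentially distributed (making announcements a Poisson process),
    decoupling a transaction's arrival time from its announcement time.
    A longer avg_interval can be used for the mempool message or for
    inbound peers."""
    return now + rng.expovariate(1.0 / avg_interval)
```

The memorylessness of the exponential distribution is the point: observing one announcement tells a spy nothing about when the next batch was scheduled.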
Mempool p2p message behavior can be changed in a number of ways, but it often breaks things. You can filter it with the already-sent map: maybe you will send 2 MB of data once, and then it's already known and won't be sent again. The only thing that broke was the block tester stuff; the block tester was using the mempool p2p message to find out things. If your mempool is big, it's a DoS vector. It's a privacy problem. And getdata has to go to the mempool, not just the relay pool, which is a privacy problem. What stops us from turning it off? Most... well, maybe next release we say we're deprecating it. SPV wallets are using "inv" data ??? Send it through the bloom filter, which probably doesn't break anything, and if it breaks any of them then we should consider that acceptable at that point anyway.
Filtering and rate limiting won't harm SPV clients. What if the SPV client gets blocked? Should it get disconnected if it breaks the rate limit? My observation is that no SPV clients are sending back-to-back mempool requests. I looked for violations or something; I used to believe that they only sent the request at the start, but I found out that was untrue. It's on any change to their key system. Some say it's only at the start and only once. I see one, and then one ten minutes later. If they send another within 1 minute and we just drop the connection immediately or something, that would be interesting.
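The rate limit being floated could be as simple as the check below. The one-minute threshold is the figure from the conversation; the disconnect-on-violation behavior is the "just drop the connection" suggestion, not an agreed policy:

```python
def check_mempool_request(last_request_time, now, min_interval=60.0):
    """Per-peer limiter for the mempool p2p message. Returns a pair
    (allow_request, disconnect_peer). Back-to-back requests inside
    min_interval seconds get the peer disconnected; well-behaved SPV
    clients, which request at most every few minutes, are unaffected."""
    if last_request_time is None or now - last_request_time >= min_interval:
        return True, False
    return False, True
```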
Subsequent mempool requests won't be needed in the case where you're trying to get descendants and all descendants are given in the mempool response. He's using a bloom filter; he's updating the filter and then does a mempool request. It's not an insane construction. If it's far down your bip32 list, there should not be any payments to them anyway, unless you had the same wallet on multiple devices at the same time, where something far down the bip32 list was in use. The consequence of not doing mempool requests is not seeing zero-conf transactions, but in SPV you should not be showing them anyway.... people just think it looks cool. Seeing it immediately, people think that looks cool. You do need to know when your own coin is spent.
I have another application for something like the mempool command, but not the same thing. It comes up in compact block relay. If you are using compact blocks, your hit rate is almost zero for the first several hours you're running, because your mempool has nothing in it. When you first start, you could prime your mempool with the mempool command; you want 1 MB of it or something. A mempool RPC that only gives you 2 MB of mempool. I have a patch where I added an RPC call that will send a mempool p2p message to the first node, and while it gets the results it relays them to the other ones. That's how I glue my mempools together. Patrick has a patch to do something similar too. If we want to have that, maybe it's a separate new p2p message: mempool synchronization. It's a pretty cool model. Every block, send the command again, and it will always keep you a block ahead of the network. You could run a very small mempool, like a 2 MB limited mempool, on your node and still have a very high hit rate for the compact block stuff. You're not wasting bandwidth in doing this; there's no duplicate data between those calls. You're free-riding off their memory, but not really their bandwidth. It's less bad. It's kinda cool. That could just be a new p2p message for mempool syncing. When bloom filters are disabled, we can disable mempool and turn off answering getdata queries out of anything but the relay map. We may actually have bugs in the relay map handling that we're papering over by falling back to the mempool; we could look over this. This would be a step forward. If we get rid of the block tester stuff, then there's a bunch of other limitations that we can apply to the mempool message. We should still limit filtered mempool access to 1 per day or however we want to limit it, maybe 1 per 10 minutes, just not back to back. 1 per minute would probably be fine, too.
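A rough sketch of the mempool-priming idea: on startup, plan which transactions to pull from a peer's mempool snapshot so compact block reconstruction has something to hit, fetching only what's missing (hence no duplicate data) and capping the total at a small byte budget. Everything here, names and budget alike, is hypothetical:

```python
def plan_mempool_sync(local_txids, peer_snapshot_txids, budget_bytes,
                      tx_sizes):
    """Decide which txids to fetch after a mempool-sync response.
    Skips transactions we already have and stops at a byte budget
    (e.g. ~1-2 MB as discussed), so even a node with a tiny limited
    mempool can keep a high compact-block hit rate."""
    to_fetch, used = [], 0
    for txid in peer_snapshot_txids:
        if txid in local_txids:
            continue  # no duplicate data between the two mempools
        size = tx_sizes.get(txid, 0)
        if used + size > budget_bytes:
            break
        to_fetch.append(txid)
        used += size
    return to_fetch
```

Re-running this every block is the "always a block ahead" model: each round only fetches the delta since the last sync.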
It should also protect the last N nodes who had relayed you a block, or a transaction that you accepted. The only reason not done yet is that the locking situation around the CNodes stuff is terrible. There's all sorts of unsafe data races already with the node stats. It's unclear what locks what, it's just terrible. For the stuff I would like to have reviewed tomorrow, I encapsulated that so you could see what locks what. The touches are only done in one place, so they can be restructured to be better. What I would do before taking the training wheels off the eviction, I would add a stat which is the last time when a block was accepted from this peer, and I would add a stat like last time a transaction was accepted from this peer and use that as eviction criteria. I think I even wrote a patch to do that. "What lock protects this?" Well... something else in that structure got updated by something a bit earlier, I assume it was protected by a lock. But no, it wasn't.
It's a pity that the bloom option was peer bloom filters instead of bloomspv. I'm just saying it looks bad because it's not the right name for the option. It will get bikeshedded. It's not used otherwise. When I went to complain about the mempool p2p message, Jeff's response was "take it out" and turn it off. The reason he wrote it was because Bitpay was monitoring transaction propagation in the network to determine double spend risks. You can hang up on someone if they immediately come back after disconnecting them from mempool p2p messages; it's really just the spy folks who are doing that. But you don't know if someone is doing "addnode" to you.
A few attorneys were asked about this: where parties agree not to surveil each other, via a protocol handshake where they both agree to not surveil each other, this would be enforceable. If you required, as part of the protocol, that you send a message promising not to take the transaction data and monitor it or whatever, then having that as part of the handshake requirement would mean it's enforceable, even if this is automated and there's no human participation. Has tor implemented this? Nope, not yet. For many of the companies attacking the Bitcoin network with lots of monitoring, it's something that would concern them and maybe even stop them. It's pretty easy to change the code. But you could have it both ways: it's not required, but I'll hang up; I'll relay blocks, but not transactions, because you didn't send the pledge. Make it something where you sign a message with your node key. It's probably not something we should do in the Bitcoin protocol right now, but perhaps in lightning or something. Having a separate high-latency mix network for transaction announcement, since what people are trying to do is figure out where a transaction came from. It could be an additional non-technical measure. This could kinda work: if you are some random hacker you don't care, but the people like [...] etc. would be gummed up. This could also apply to the "you reconnected too fast" kind of thing... if someone has addnoded you, maybe they also give a promise to not be malicious. I guess it would be an "I'm not trying to monitor the network" promise. And if they didn't, you would ban them for reconnecting too quickly. Maybe a "signpdf" message for the p2p protocol, and then we can finally catch Craig Wright badsigning.
Bittorrent DHTs make it hard to poison them unless you have lots of IP addresses in different /8s, and there are companies that sell this as a service. Having centralized directories could help. ... There are things we can do to reduce abuse on the p2p network, but the spying attacks are bad. Connection slot attacks, we have some good defenses for. We could add hashcash on connect; that could be an additional boost to not getting disconnected. How about bitcoin on connect? Yeah. That's bound to happen at some point. Maybe. The criteria is... we need lightning there. The criteria of relaying and accepting... What you want to do for bitcoin-on-connect is probably a probabilistic payment: you pay bitcoin to something like a Chaum server, get a Chaum token, then go to a node and give it the Chaum token; the node goes back to the Chaum server and gets money in a payment channel. This allows you to not have a payment channel between the pair of nodes, not even a payment graph. How is that Chaum server going to comply with KYC? It's access fees. It's fixed access fees. It's de minimis, so it doesn't count.
Tor uses a centralized directory. And for onion routing with lightning, that's sounding like the one option. The way tor works is that they are monitoring the network and they know who is running those nodes. We could set up a transaction relay thing on tor: you connect to it, you give it the transaction, it waits and then sends it. Your privacy at that point is no worse than tor's. But that isn't solving the problem; it's hoping someone else solves the problem. One of the models I'm thinking about: with the poisson delay stuff we're doing now, we could have lower poisson delays for tor peers, so you would tend to announce transactions first over tor by default. It fits nicely. The way the code works, thanks for implementing it Pieter, you can have different delays for different peers. That was not a design consideration, but yeah, it worked out. It doesn't do that right now; I wrote some code a while back for that, shortest-delay peer, etc. It should prefer tor, then it should prefer longer-lived outbound connections. Right now there is a bias towards outbound: it relays faster towards outbound peers because you control those connections. This is increasing privacy at the expense of transaction propagation; it's a tradeoff.
We know when transactions are added to the mempool, and we know when someone requested the data, so we can keep that data and not return new information. You could key it off when they connect, but the mempool message is used, on purpose, to get things from before they connected. One of my patches did that, and I found it broke things. I also tried putting the mempool RPC on a 15-second time delay; it would return immediately but with some quantization, so the response wouldn't update too rapidly. Pieter's delay for the mempool is better for that. If you have a simple filter like this, one with an aggressive response to a spammy mempool requester, you can get rid of the problem: if they keep spamming the request in order to move that timer forward, spamming more than a few times in a row, well, just drop them. When you do a mempool request, at the end of generating the response you set the time. And then you check the.... I suspect that's not going to be totally clean to implement. We know the timestamp of when the network code received it, even if the processing is much later. In the mempool lookup function, you can read the timeout right there, compare it to your mempool-request time, and return Not Found. That's good. That's cute. Yeah, I suspect the people doing "getdata"-based transaction tracking won't realize that happened for a long time and their data will just go to crap.
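The "don't return new information" idea: answer a mempool request only with transactions that were already in the mempool when the request arrived, keyed on the network-receive timestamp rather than the (possibly much later) processing time. A hypothetical sketch:

```python
def answer_mempool_request(mempool_entry_times, request_received_at):
    """mempool_entry_times: {txid: time the tx entered the mempool}.
    Only transactions that arrived at or before the request's own
    network-receive timestamp are revealed, so repeating the request
    leaks nothing about later arrivals, blunting getdata/mempool-based
    propagation tracking."""
    return sorted(txid for txid, added in mempool_entry_times.items()
                  if added <= request_received_at)
```

The same cutoff can be consulted in the getdata path: a lookup for a transaction newer than the requester's snapshot time just returns Not Found.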
We have a flag already for when a mempool request has been requested, although it's not latching. That's a good start. It's almost what we wanted. It's not protected by anything. I hope that's protected by something. There's definitely data races. The relay flag thing is probably actually unsafe. We might concurrently check the flag and also set it elsewhere at the same time. It might only be accessed from message processing, maybe some RPC accesses it?
bitcoinj has a thing where you only propagate a transaction to half your peers, then you listen on the other ones and see if it comes back. This also weakly helps with the spying thing, since you don't send it to all the peers. You want the originator of a transaction to behave no differently than an arbitrary node. Bitcoin Core should have a similar feature in its wallet. Everything is relayed with delays in Core now. Later on, there should be less of a difference between outbound and inbound peers. The rest of the inbounds can have 20-second delays or longer. There's no reason to be much faster than that for most of the peers, except for a few that are serviced much more quickly.
pgp clearsign has this undocumented feature: if you both sign the same message, you can concatenate the two signatures and the signatures are valid. They have to sign the same message hash. You can make a single message with a bunch of signatures, and if you run gpg verify on it, all the signatures check out. We did this for a letter to Debian to tell them to stop packaging our stuff. They wouldn't update the bitcoin package, and it clearly didn't work on big endian hosts, because the tests were failing. The earlier version appeared to work because it compiled on big endian and didn't include tests back then. petertodd has done this for a multisig git commit. Fedora turns off tests when they notice them running, and they have done this for the bitcoin package.
For bitcoin, the standard tests... well, something recently made them much slower. It's taking tens of seconds for me now. "Recently" might be months. It used to be like 1 second total. This was just "make check", not the RPC tests; it runs "test_bitcoin". My comment was limited to that.
Perhaps there should be an option to run RPC tests from the makefile. It would be nice to parallelize this along with the others. I don't remember why it wasn't pushed.
If you can get something like OP_RANDOM to work, then you don't need to have bitcoin mining in the first place and mining could be replaced.
"Malleability of the blockchain's entropy" https://eprint.iacr.org/2016/370.pdf
There are a number of people that say "no, don't keep the GUI" but this is frustrating because there are many users that require the GUI. jonasschnelli wants to use a clone of bitcoin.git to create a separate wallet that uses RPC and the p2p protocol, while keeping most of the same build system.
Longpoll would be an efficient solution. You can go over p2p protocol, but then you have to setup handshakes. Every language has support for longpoll. You don't need zeromq or magic dependencies. I think it's more, hard to say, I would guess the guarantees are higher that they would receive your notification because the state is held in bitcoind. There's a memory limit, at the moment I guess it's a couple of megabytes, which can be configured. In zeromq, it's all behind the scenes and you don't know how to configure it. In RPC, there's sequence numbers. It can handle notify as well.
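The consumer side of such a longpoll notification interface could be as simple as the loop below. Here `fetch` stands in for a blocking HTTP request that returns once bitcoind has notifications with a sequence number beyond `last_seq`; all names are hypothetical, sketching the model described rather than any existing bitcoind API:

```python
def longpoll_loop(fetch, handle, last_seq=0, max_rounds=3):
    """Each fetch(last_seq) blocks server-side until there is news, then
    returns (new_seq, events). The sequence numbers mentioned in the
    transcript let the client detect whether it missed notifications
    while disconnected, since the state is held in bitcoind up to its
    configured memory limit."""
    for _ in range(max_rounds):
        new_seq, events = fetch(last_seq)
        for event in events:
            handle(event)
        last_seq = new_seq
    return last_seq
```

Because it is plain HTTP, any language's standard library can implement `fetch`, which is the "no zeromq or magic dependencies" argument.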
I would like, in time, to eliminate the notify calls. Calling into a shell is really ugly: it's insecure, super inefficient, ties up resources, and is a potential DoS vector. You could provide a script that consumes zeromq notifications instead. It also restricts how much bitcoind can be sandboxed: restricting the set of system calls is unsupported for as long as you're calling out to a shell. Calling a shell is basically what all exploits do, too. The deprecation strategy could be removing it really quickly. We have to announce the deprecation in the release notes. We first have to have an alternative, then we have both in the same release. When removing it, we will always provide a python script that has the same output. It could be a three-line script without dependencies. With zeromq you always have some annoying dependencies, and it causes strange issues. Most dependencies have turned out to be kind of painful, like openssl and zeromq, and boost is also kind of a showstopper on performance. It's painful for us, but also painful for people who want to use the interface.
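To illustrate why the shell call, rather than the notification itself, is the problem: the `%s` substitution that `-blocknotify` performs can be done on an argument vector and executed without ever invoking `/bin/sh`. A sketch (the runner is injectable purely so the logic is testable; this is a replacement-script idea, not existing bitcoind behavior):

```python
import shlex
import subprocess

def dispatch_notification(template, blockhash, run=subprocess.run):
    """Split the user's command template once, replace each literal
    '%s' token with the block hash, and execute the result as an argv
    list, never through a shell, so an attacker-influenced string
    cannot be used for shell injection."""
    argv = [blockhash if tok == "%s" else tok
            for tok in shlex.split(template)]
    return run(argv)
```

A notification consumer built this way can sit outside bitcoind entirely, which is what allows bitcoind itself to be sandboxed with a restricted system-call set.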
We have a nice depends system now that builds all of our dependencies, which is less of an issue for all the people using the dependencies. People will have to learn how zeromq works, just to build... longpoll is really, it's, AJAX is essentially longpolling. Also the push notifications... it's a synchronous handler you install for new data when it arrives, but it's modeled as a separate thread that waits for longpoll. Because it's HTTP, it can go across net, SSL, caching, .... we could implement it differently because we have an event-driven HTTP server. We don't need to support 1000s of listeners. The old notification system was really inefficient, so this is at least a step forward. We could dive a bit more into using events. Although it's not a priority at the moment.
cmake architecture does not look like an improvement. Bazel looks better from an architectural standpoint. cmake suffers from, everyone who wrote it was probably Windows-only and had never used anything but Windows. And then they extended it to make Unix-style makefiles. It suffers from Windowsisms; it's the same thing that the Windows people say about us, but the other way around. But how do you learn make? Well, there's cross-builds in there, for example. Kodi, the big media center application, ... did the depends system for that, years ago, and it had like 60 dependencies. Ported that to Android, OSX, Linux, Windows, BSD, and so we got tired of being in that situation where we had custom crap for everything. So we had one unified build system for it. In the meantime, I learned a ton about cross-compilation, because the Android cross stuff was primitive. I contributed to the Android NDK and helped with their toolchain. That was my big oomph.
With gnuradio, you can use numpy to algorithmically generate golden test case input and output for unit testing. Opus contributor.
sipa: i suggested you could just require the last 32 bits of the merkle root to be a constant, allowing a much stronger optimization (at the cost of 2^32 grinding work on the merkle root) that subsumes the asicboost optimization in a way that doesn't need the patent. gmaxwell commented that you can also count blocks that don't do that as having lower work.
gmaxwell: or, alternatively, you require the first 32 bits fixed, and now grinding the root per the patent is unreasonably expensive. And either of these two things can do the work modification. so e.g. asicboost hardware would continue to work, but effectively have a 20% higher target, neutralizing the advantage.
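The "work modification" idea above could be sketched like this: blocks that don't pay the ~2^32 grinding cost of fixing the designated merkle-root bits get their chainwork discounted, so hardware using the patented optimization keeps working but loses its advantage. All names and the 20% figure here are illustrative, not a concrete proposal.

```python
# Hypothetical sketch of counting blocks that skip the required merkle-root
# grinding as having lower work. Nothing here is a real consensus rule.

def fixes_last_32_bits(merkle_root: bytes, constant: bytes = b"\x00" * 4) -> bool:
    """True if the last 32 bits of the 32-byte merkle root equal the
    required constant (achieving this costs ~2^32 grinding work)."""
    return merkle_root[-4:] == constant

def effective_work(block_work: int, merkle_root: bytes) -> int:
    """Discount blocks that don't grind, as if they were mined against a
    ~20% higher target, neutralizing the claimed asicboost advantage."""
    if fixes_last_32_bits(merkle_root):
        return block_work
    return block_work * 100 // 120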
How do you want to serialize and so on? It seems like, we have the client version, so why not use that as the trigger? Yes. Before, I think, client version... it's not once after this protocol version you always have witnesses. You want the ability to not ask for a witness. If that wasn't the case, as of this point everything would be witness, then yes you can use the client version. But the client version is a highly overloaded vaguely defined thing and I'm not sure we want that. I dislike that it's fed directly into the serialization code. I want to get rid of that.
It seemed like for this pull request it would have been easier to use the client version. I don't even know what the color looks like. This is for segnet. The color for testnet is nice. Elements is purple. Matt doesn't like the testnet color because he can't tell the difference between testnet and regular. I tried to squash everything but it's a whole lot of work. I want to squash the segwit fixups into a single commit. Non-squashed is annoying for someone who hasn't been following the pull request's progress. I wanted to make a squashed version to see how much work it would be, and hopefully reuse. There are a few changes that are hard because they change things all over the place, like the negative witness serialization flag.
Cleanstack was only a standardness rule.
We need a way in Bitcoin Core to mark service bits that DNS seeds provide. https://github.com/sipa/bitcoin-seeder/pull/36/files
The witness includes the program and the signatures. The script witness... the witness is an abstract term because there's a witness of a block, witness of a transaction, witness of a script. There's also a "witness program".
What about the way caching works on transaction signature verification? I don't recall writing a test about this. The signature cache is just a.. what's the key again? It's sighash pubkey signature triplets. So it's just math. It has no knowledge beyond... That's not true, stop arguing with yourself sipa. Script sighash. I remember needing to make changes to sigcache for segwit. We needed to make changes to the amount. Oh. Did we fix bitcoin-tx for the amounts stuff? Yes, but it has not been tested yet.
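Since a verified (sighash, pubkey, signature) triplet is "just math", the result can be reused without any knowledge of the transaction it came from. A minimal sketch of that cache shape, with `ecdsa_verify` as a stand-in for the real verification routine:

```python
# Illustrative sketch of a signature cache keyed by (sighash, pubkey, sig)
# triplets, as described above. Not the real sigcache implementation.

class SigCache:
    def __init__(self):
        self._valid = set()  # verified (sighash, pubkey, sig) triplets

    def verify(self, sighash, pubkey, sig, ecdsa_verify):
        key = (sighash, pubkey, sig)
        if key in self._valid:
            return True  # pure math result, safe to reuse across contexts
        if ecdsa_verify(sighash, pubkey, sig):
            self._valid.add(key)
            return True
        return False
```

The real cache also bounds its memory and evicts entries, which this sketch omits.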
What are your plans in segwit for the fix removing the O(n^2) sighashing stuff? There's a patch by Nicolas. I sort of want to not burden the first implementation of segwit, or the first round of review, with that, because it's not necessary for correctness. I think we'll merge segwit without it, but not release until the caching is in. So the caching stuff will be a second pull request. We will merge that next. But the pull request won't be published until segwit is merged, because it would be distracting otherwise.... there's also some fixups, cross cut etc. I know. I looked at that because I wanted to suggest splitting the wallet into another pull request.
Perhaps we should just merge the segwit pull request prematurely, without a release. The problem is that people run master. If we merge, then things will break, people run off of master, you forget about the diff, and you forget about the tests that haven't been written yet. It's annoying that people run master.
Signature version is determined by things outside the script. This is for reinterpreting op codes that already exist, right? You would gate the introduction of new opcodes by sig version. Otherwise, as long as you're restricted to redefining NOPs, sure.. But if you want to make any other change, including adding a new opcode, is that determined by sig version? Wouldn't that be determined by...? A new opcode would be a new opcode, sig version just passes through to the signature checker as far as I can tell.
What if we want an OP_CHECKSCHNORRCHECKSIG opcode. In bitcoin scripts, you cannot introduce that. So you're saying stick the version again to introduce the new opcode? It would be a new one. It would only be valid in new things. So you would use sig version for that? But it's the witness program version. The sig version determines your signature hash. Not your script version. That's the only thing it currently does, but it could do anything. Say we want to introduce OP_SCHNORR thing. Concretely, I would take in this range between OP_NOP10 and OP_... there is a bunch of things that are invalid. I would add OP_SCHNORR there. And then have in the execution, if sig version is 0 or 1, it's a failure, otherwise it works as OP_SCHNORR and then you can make a new witness version (version 1), and you can't do it in version 0. Well, version 2. Only 0 is defined right now. We changed it back. Okay, awesome.
It's confusing because you can have a signature version and a script version. Right now it's signature version. Signature version is about the signature hash. We're going to be using a varint. You would have two layers, signature version and script version, and there would be an indirection between them. But perhaps that would be pushing it too hard, and it should be merged into a single thing. Sig version is for redefining signature operations. Script version would be a better name, I agree. It's defined by things (not exactly) outside of the script, it's actually uh... the value is, the script version is determined by the pubkey... that's the thing you're spending. That's a script. But it's not necessarily the script. It was unclear. There might be situations where you need to increment both. But you might not need to. If you do OP_MAST, you need to bump the signature version the way this is implemented. If you look at Johnson's code, he showed us how he would do OP_MAST. So you bumped the version? You stuck it right there in that function I think. You didn't even have it as an opcode. The name "sig version" is good in that currently the only thing it determines is the version of how the signature hash is computed. Later we might need to generalize the layering: there would be a script version, and certain script versions would map to certain signature versions. Should we call that a witness version? That's something that the script does not have access to. You would get rid of the signature version altogether. That's going to cause changing a variable name in over 100 places; that's a messy diff. Putting aside that name, just conceptually, does that concept make sense to you? Changing the sig version to witness version. EvalScript doesn't have access to the witness version, which is stored in data outside of this. Witness version could apply to anything...
we could do an OP_EVAL like thing that is completely independent of segwit that also uses that. That's how I would do it. Independent of segwit in the sense that we introduce OP_EVAL, which is a redefinition of a NOP, takes a script from the stack and executes it, and it would also use exactly this mechanism. Witness version? Validation version? It's a bit more subtle in that script verification has two layers: evaluation and verification. You have EvalScript and VerifyScript, and VerifyScript determines what the sig version is. The signature for VerifyScript does not take the signature version; it figures it out. That's what it should do, because it's completely internal to the script verification module but not the evaluation logic. That's an argument for why it shouldn't be something all over the place. It's only all over the place in the signing code because there it crosses layers much more than here. Here, it's perfectly contained to EvalScript. And this indirection that we're speaking about actually already exists in the form of the flags. The flags are determined externally, and VerifyScript uses the flags, and the contents of the scriptpubkey, to determine what the script program version is. So, assuming you want to do something like Schnorr signatures, you'll probably want to have script versions and then signature versions, and there's a mapping from script versions to signature versions, but not every script version increase has a new signature version increase. One would assume the other. If we would want to do that now, I think that's overkill and complicating the patch. What we'll do later is have two enum types, and C++ warns if you use the wrong one. We would do conversion from one enum to another. Sig version can be changed to an enum already.
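The "two enum types plus a mapping" idea might look like this. This is a Python sketch (the real code is C++, where using the wrong enum is a compile-time warning), and the member names are hypothetical:

```python
# Sketch of the layering described above: script versions map to
# signature-hash versions, but a new script version does not necessarily
# imply a new sighash algorithm. All names here are illustrative.
from enum import Enum

class ScriptVersion(Enum):
    BASE = 0        # pre-segwit scripts
    WITNESS_V0 = 1  # segwit v0 witness programs

class SigVersion(Enum):
    BASE = 0        # legacy sighash algorithm
    WITNESS_V0 = 1  # BIP143-style sighash

SIGVERSION_FOR_SCRIPT = {
    ScriptVersion.BASE: SigVersion.BASE,
    ScriptVersion.WITNESS_V0: SigVersion.WITNESS_V0,
    # A future ScriptVersion enabling, say, OP_SCHNORR could map to a new
    # SigVersion -- or reuse an existing one, since the mapping is explicit.
}
```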
That's the witness version number which maps to what will be the script version number, which maps to what will be called the signature version number. If you want to add OP_SCHNORR, you'll do it in witness v1, if this is sig version v1 then this opcode is enabled or something. I actually think we may not need script and sig, because they're, so the alternative is you just pass through the script version to the EvalScript function. Does that already exist, script version? It will be the new name for sig version. Then we're replacing... we'll call it script version rather than sig version, we will be fine if we do that now and not need any new changes. So you could just redo this commit if you wanted to, yes. Yes, but the variable names are still sig version all over the place. So maybe every line that has sigversion will be touched.... I think it might be something to figure out later.
If you renamed to script version, it would map to... something like OP_EVAL would match. Conceptually, you could. It might be something beyond witness data, perhaps from another chain if we ever figure that out. That would also result in new script versions. The witness version is just a means for indicating a script version. Right now they are the same, in the future they could differ? There's base, script v0, and base is separate from the witness version. Oh, very well. Base is implied, though. No, you need an entry for it. There are currently two script versions. If you passed the script version, then you basically have to have a non-witness version, you're right you're right, okay.
Whenever you relay, you should bump the expiration time forward. I added a minute to the expiration time, which mostly makes up for not having that.
Replace-by-script. Not the script, but The Script. Just take the scripts out of everything. Declare you own something, there you go. It's a lot of work. If we took out scripts entirely, that would solve a lot of the validation cost.
There was a transaction I made, where all the scriptsigs are really short, with a bunch of inputs, and nobody noticed. I was disappointed. The most hilarious thing was somewhat recently, jl2012 asked me about a transaction I made back in 2012. What was unusual about that transaction was that I used some sighash flags so that I wouldn't sign it. Well, if you want people to ask about that, well then sign and then I will know to ask you. Apparently this is one of two uses of the sighash flags to like ANYONECANPAY, and one input. It's one of the only ANYONECANPAY transactions, with sighash single, with more than one input. There's two transactions. Sighash single, corresponding, what's the use again.
Sighash single, anyonecanpay, and more than one input, there's a semantics change from before segwit to after segwit. The behavior change seems to be desirable, the behavior in segwit is more useful. The issue is that the old version signed something the new version didn't sign, yeah the position. Right, thank you.
There is a question on stackexchange, where someone suggests associating addresses not just with a private key but with a ... for tax purposes, why not just prove it? Between those two keys, use a multisig. If you lose the one key, you want to be able to prove it, rather than keeping the money. You can still do that. You just need a covenant, where you can spend it with this key, or you can spend it with this other key, but when you spend it with the other key the output should be constrained: you must have an output that pays the value of the input. You have a coin, you lost the private key, but then you want to prove you destroyed the funds. So you need to destroy it. You don't have to prove you lost the key; you don't lose the burn key, you can blackhole it with the other key or something? There's much less attractiveness in bothering to burn, because you don't gain anything. Right, you're burning the amount "to prevent insurance fraud". Somehow I don't think this will invite a lot of insurance to the market.
You could make it contingent on you being repaid, then you don't care if someone does it. That's clever. You could construct this with a covenant so that you coinjoin the... yeah.
P2SH is all standard. But back when it was made standard, it was the same code being used. It was our intention to allow bare multisig back then. But no real use ever arose. But a very small one; if they would otherwise be standard or be stopped by this, well, perhaps they should be made not standard. You should give some rationale about that. These are orthogonal. I would first look to see if they are being used. I know they're not being widely used. GetTransactionSigOpCost is consensus critical, right? Yes.
What if the witness data doesn't match the commitment? Does that ban the node that sent you that? Yes, but only if it's a witness node. Do you even handle that? Wrong witness data and no witness data are treated as the same. That's actually not true.
/dev/urandom alpha, choose words from a random word list, deterministic RSA key generation, but how to correctly format?
Is it worse to update an element already in the tree than to just add a new one? It's basically the same scaling. The k factor ends up being different, but it's essentially the same. It's less efficient because the k values are bigger, but it's within a factor of 10 for sure in terms of absolute difference. Updating something in the middle, you need all the data to get down to the bottom of the tree. When you add something new to the end, that data is less likely to be novel.
You can put a small upper bound on this. log2(n) is pretty small in the real world; some people might mistakenly use the term O(1) for that case. And also, depending on the implementation... on some implementations, it can be essentially constant because the bit of data needed to add a new element is something you already have. Whereas changing something far back in time, you wouldn't have that data. You would have to bring that back in. It's a locality thing. If you change a leaf, you have to update all the ... did you look at the examples? If you add new stuff, in most cases, you will already have the required data in cache and it's actually fast.
All I'm saying is that to add a new element to the data, is a data you touched to add a previous element, so it's all going to be in cache. To add an arbitrary element in the tree, you will have to bring in more old data which will involve disk io. In practice, this is a big improvement. Big O notation scalability discussions often have nothing to do with the real world.
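The locality claim can be sketched with a merkle-mountain-range-style accumulator: only the right edge of the tree (the "peaks") is ever touched by an append, and appending element n reuses exactly the data touched when appending element n-1. This is an illustrative structure, not the specific proposal under discussion.

```python
# Sketch of why appending is cache-friendly: an MMR-style accumulator keeps
# only the right-edge peaks. Each append touches the same small working set
# as the previous append, while updating an old leaf would need paths long
# since evicted from cache (or pruned away).
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    """Combine two subtree roots (sha256 as a stand-in)."""
    return hashlib.sha256(a + b).digest()

class Accumulator:
    def __init__(self):
        self.peaks = []  # peaks[i] is the root of a 2^i-leaf subtree, or None

    def append(self, leaf: bytes):
        carry = leaf
        for i in range(len(self.peaks)):
            if self.peaks[i] is None:
                self.peaks[i] = carry
                return
            carry = h(self.peaks[i], carry)  # merge equal-sized subtrees
            self.peaks[i] = None
        self.peaks.append(carry)
```

After appending 2^k leaves the accumulator holds a single peak, the merkle root of all of them; at any point the state is just O(log n) hashes.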
If we had a tree that we never have to read from, then why not write to /dev/null? It's pointless to only record spends. What are you going to do with them later? For a given output, you query the tree, then you look for boolean spent or not. The problem with a UTXO thing like that, if you can query from it based on a random input, it's pointless to just make a tree. You have to have a purpose. You're going to query it later. How are you going to query the STXO tree? What key? Not using the transaction id. Just using the position. If I was using the txid, it would be pointless. I'm using the position, the insertion order. The position of the output within every single output ever created. Not within a transaction, but within the whole chain. Because you do that, you have locality. Whereas if it's just purely by hash or txid, you have no locality and can't add things.
With pruning, there's no such thing as one having more entries than the other. The amount of data you commit to is irrelevant. The amount of data that you need is far more relevant. If I have a data structure in a merkle sum tree, with number of items unspent under that tree, if I prune it every time I see a node that everything under it spent, for all intents and purposes I've thrown away the data. It's not relevant anymore. With a STXO tree, because you have to index it by order of creation, you already have, you already have something committing to all potential outputs. It's not really correct to say that it does not have the data, but you have the data in a format where it doesn't know it's in there anyway. Once you combine that, you might as well ...... You have to update nodes. Adding a node is an update. Adding is not necessarily cheaper.
When you add a leaf to this tree, if for instance you do this based on order of transaction spent, it's useless. You'd need to make a proof that a spend doesn't exist. That's not useful. What key are going to query in that STXO set? If you are querying by position, you might as well combine STXO and UTXO trees. When I spend, I have to add it to the right position.
It acts as a preprocessor that acts over the source code. The changes are made by changing certain bytes. Then you run it into a loop. Most of the changes don't compile. It'll go through and compile, if it successfully compiles then great, if the tests pass then it saves the source code. I've only used it on a thousand lines of code. I only choose one character at a time. I have a set of character replacements. It only considers 40 possibilities at most. Usually I run this on code where I have stripped out the comments. Replace the greater than symbol and the plus sign. You replace greater than with less than, and plus sign with minus sign, then try all of those. They should all compile pretty much. All of them should break the tests. The symbols don't even occur often in comments, either.
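The single-character mutation loop described above can be sketched as follows; the compile-and-test step is stubbed out as callables, since in practice each mutant would be written to disk, compiled, and run against the test suite.

```python
# Sketch of the character-flipping mutation tester described above.
# The replacement set and driver are illustrative.

REPLACEMENTS = {">": "<", "<": ">", "+": "-", "-": "+"}

def mutants(source: str):
    """Yield every variant of `source` with one operator character flipped.
    Most such mutants should still compile, and all should break the tests."""
    for i, ch in enumerate(source):
        if ch in REPLACEMENTS:
            yield source[:i] + REPLACEMENTS[ch] + source[i + 1:]

def surviving_mutants(source, compiles, tests_pass):
    """Mutants that compile and still pass the tests -- i.e. gaps in test
    coverage worth saving for inspection."""
    return [m for m in mutants(source) if compiles(m) and tests_pass(m)]
```

In real use the comments would be stripped first, as noted, so that flipped characters inside comments don't produce meaningless mutants.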
There are tools to do this for Java, which are syntax-aware and do smart stuff. There's a tool that someone made for C, but it won't do C++. Even that tool, Milu or something. Making a tool syntax-aware for C++, you can do it based on clang or llvm, sure... another way to do this would be to mutate the object code instead, or any lower-level code, but then it becomes difficult for you to find out if the change was even meaningful. Well, you could also do transaction fuzzing, which Jonas is doing.
Code mutation tests are basically useless to do unless you have 100% test coverage. Wallet test coverage is so poor that it might be the place to start from. It's scary to have untested wallet code. You can have a sorted list of tests, then when there's a mutation test triggering failure, then you move it to the front of the list. So the ones that break things the fastest end up at the front of the list, to reduce the total amount of time you spend testing. travis-ci should be doing this, but it doesn't. Jenkins has a thing to do this.
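The move-to-front ordering described above is simple to express: when a test kills a mutant, promote it, so over time the fastest killers run first and the total testing time drops. A sketch, with the test runner stubbed out:

```python
# Sketch of the test-ordering heuristic described above: tests that kill
# mutants migrate toward the front of the list.

def promote(tests, killer):
    """Return a new ordering with the mutant-killing test at the front."""
    return [killer] + [t for t in tests if t != killer]

def run_ordered(tests, fails):
    """Run tests in order; on the first failure, return the promoted ordering.
    `fails(t)` stands in for actually executing test t against the mutant."""
    for t in tests:
        if fails(t):
            return promote(tests, t)
    return list(tests)
```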
Tadge's reimplementation doesn't currently have tests. They might be able to run with the p2p tests. If they had written tests, then we could swap tests. We can't do mutation testing on their implementation if they don't have tests that would fail from the mutations.
Which branch should be used for the mutation testing? use segwit-master
"Get script for witness", it's kind of vague, who's getting what for what? There's also "get script for multisig". Does it parallel that? Yeah. What gets you the OP_HASH160? Okay, that's where the parallel comes from. There's getscript for p2sh which should give you a CScriptID and gives you a CScript, okay that explains the nomenclature.
Not even businesses would run Core without the GUI; the size of the network would go down, only a niche set of developers would run this stuff anymore. I think we've seen a bunch of that right now. The current level is not entirely the GUI. It could become worse if we tossed the GUI. How many reachable nodes are running GUIs? Windows is fixed now, but we ... from 0.10 until 0.11.1 or whenever when you fixed it, it reliably corrupted itself on Windows, like if you switch off the power without shutting down first, not if you just use it on a stable computer. It worked.
The problems I was concerned with seem to be 99% fixed or entirely fixed. I won’t deny that there are problems on Windows. It would be great to have a Windows maintainer that actively fixes issues. The user gets privacy and security by running Core, but unfortunately they don't know this. And we don't advertise this. "This is the most secure thing, oh warning it downloads a lot and it's slow". And also the Bitcoin Privacy Report thing wrote that it's not very private (but was somewhat wrong).
We should add an SPV mode during initial sync, which can take a week. This way we can have a transaction list. Yes, this is very important. Then if you take 3 weeks to have full validation... it's also the fact that for many consumers, if it's just a background service thing you install, they are not incentivized. Even if they know why they run it, people aren't incentivized to upgrade it. If it's a usable application with new features, people want a new version, but if it's just something running in the background then people might not upgrade. I think the GUI is why some people upgrade.
If that's the argument, then wouldn't you argue that it's important to ship segwit wallet features with the GUI in the release? Just to get it deployed. If it's an argument about a way to get people to upgrade, then give them new wallet features. bip9 segwit activation as a little green light on the GUI. They are expecting little graphs and charts. That's why I am working on longpoll. An easy way to run a GUI that can connect to your remote server. Most people are not running nodes on their main computer. We don't know that; you certainly don't run them on your phone.
What's really needed is an incentive to run a node, too. This is a consensus problem, though. Visualizing the network, security, privacy, continued health and survival of the network. If there were no full nodes on the network, we would have a problem. If the number of reachable nodes drop below some value, sure I'll fire one up in private. Maybe... Maybe you know what other people are doing. But maybe there are 50 nodes in a world, and 1 person doing a sybil attack. Are you capable of validating? If so, then how different is that from not transacting often? But are you capable of validating really if you are not now? Maybe. Validating one month out of the year, is that a problem? Maybe not... What matters for the network isn't some abstract that you're capable of validating. How bad would things have to get before you would start to validate? It's not about how bad things become, but how much benefit do you get from it?
If I am not doing many transactions, maybe I don't care enough. You want the barrier to running one to be as low as possible. What about transactions encoding evidence of validating the blockchain? Fully delegatable. You don't validate, someone else does; it's fully outsourceable. Validation doesn't have an inherent marginal cost, so someone else can validate it for someone else. You want the validation cost close to zero; if you can do that, then none of this else matters. At some point, the validation cost is your personal time investigating this. There's a multiplicative effect where, if the resource cost for a node were zero, then the amount of time you have to spend bringing it up is spent clicking, clicking, done. If the node takes 80 GB, then the time you have to put into dealing with it is much greater. These things do not exist in isolation.
You'll have to understand bitcoin well enough, figure out how to get software, how to install it, how to maintain it, I mean... same experiment with mining. This is something to measure. The total friction to installation and validation. We can substitute this goal with running a bitcoin wallet, you have no clue what a full-node is, but here's a bitcoin wallet that people told you to run, and you ran it, and you got a full node on the side. My takeaway given that task would be, I got this running, how long until I can use it? Well we can fix this. This is the motivation for headers-first and maybe SPV-mode syncing. Instead of downloading from block zero, scan them for transactions, then go back, you saved the blocks on disk, then later sync up the chain. I'm going to bet that the patch to do this is going to be under 100 lines, although really tricky 100 lines. Look how small pruning was to implement. It took a bunch of time. No, I think it was a bigger pull. It was touching all over the place, and in stupid unexpected ways too.
Could we work towards a proof that all solutions are going to be outsourceable, even when coupled with transactions? And what do we consider to be an effective defense to outsourcing? e.g. people still store their private keys in plaintext with third-parties....
It's scary how often he pops up and asks about pruned nodes. Some of us don't think enough about the implications of pruned nodes, especially with soft-forks. If the wallet were to break on a pruned node, I wouldn't notice. That gets some attention. More configurations are more to maintain. I think that for the SPV mode, what we do is only do it during bootstrap, at least initially, or when you have to catch up for a couple days, like when you're offline for a few days. Many people forget about this case. We may not be able to do much better there, because if we do this SPV mode, I would like us, if at all possible, to not implement bloom filters, and we don't want to break privacy. That's a very good point. It would probably be a lot less work to not implement bloom filters. We might do well going forward to avoid calling this SPV mode; we should call this "catching up mode" or "validationless mode". "Eventual security mode". Dangerously fast mode, danger mode.
The added cost to the user of having eventual full security is basically zero, as long as they are not paying for the bandwidth by the megabyte. It could even do the sync slow, like adding sleeps to the process if necessary. There's someone in #bitcoin-core-dev complaining about 6 weeks of syncing on their Windows system, not knowing that their chain files are corrupted.
If you take multibit right now, and you import an old key with the date set way back, it takes over a day to synchronize. I'm sure it's probably multibit's fault and they could do it much faster, but right now the initial usability from multibit is that it doesn't do anything; it doesn't even bloom scan it. So I don't think.... what does other software do? Electrum has a server with indexes. What does breadwallet do? They scan headers until a week before your oldest key birth date, and from there they start downloading filtered transactions. So they just sacrifice privacy? Okay. That's pretty fast though, right? It's extremely fast. Alex did a test importing a key in breadwallet, and it's really fast-- like a couple of minutes.
If you organized the old blocks in a tree and used an efficient tree proof thing, you could do block download of log n blocks and get some assurance that you have the correct thing, assurance that all the blocks are secure because you are randomly sampling them. Get rid of the header download time? Download the headers, get all the blocks backwards. You can't validate them. You can't fully validate them, but you can see that they are consistent or something. I don't know if that necessarily has a lot of value. You can get to 95% assurance... you can't check a single signature in that case. You don't have the inputs. Adam is suggesting randomly sampling blocks. You don't have the UTXO set. What you would like to do is... validating blocks in reverse, wasn't that on the mailing list? If you have UTXO commitments, you can do this fine. I think someone had a proposal without UTXO commitments.
SPV without bloom filters-- it is probably a bad idea to download blocks and throw them away. What about storing a random subset and figure out how to stream them to other nodes? I made a thread on bitcointalk a few years ago. The main challenge is that you need an efficient way of signalling in address message what ranges you have. And then given that efficient piece of information, I should be able to figure out which nodes have those blocks. I gave a scheme that is efficient and only required 32 bits of data in the address message, but to figure out what blocks a given node had, it required computation linear in the number of blocks. You signal you are keeping X amount of blocks, a random seed, and then someone can basically use that random seed to figure out which blocks you're keeping, I think it's possible to do better than that, where you don't have to do this linear computation. I think it's very important to go beyond pruned and go to a small range of blocks so that there's a distributed copy of the history. For now we can rely on the existence of non-pruned nodes because it's not so burdensome to do so, but I don't think that's going to be true in 5 years. Right now there's this weird bimodality in cost where there's this question of how new is your computer or how good is it? If it's good, then the costs of running a node are small. If your computer is older or optimized for low power, the costs are high. It's common for computers to have 128 GB drives, but it's costly to run Core. But it's also common for people to have 4 TB drives in which case who cares about the size... as long as we can rely on people with 4 TB to keep all the blocks, then the network will keep running. But once the blockchain is beyond that size, then the number of available archives will drive towards nothing.
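The signalling scheme described above (advertise a seed and a block count; anyone can recompute which blocks you keep, at linear cost in chain height) could be sketched like this. The hashing and threshold details are illustrative, not the bitcointalk proposal's exact construction:

```python
# Sketch of seed-based block retention signalling: a node advertises only
# (seed, keep-count); peers recompute its kept set with a linear scan.
import hashlib

def keeps_block(seed: int, keep: int, height: int, chain_height: int) -> bool:
    """Deterministically decide whether a node with this seed, keeping
    `keep` of `chain_height` blocks, stores the block at `height`."""
    digest = hashlib.sha256(f"{seed}:{height}".encode()).digest()
    score = int.from_bytes(digest[:8], "big")  # uniform in [0, 2^64)
    # Keep the block if its score lands in the lowest keep/chain_height slice.
    return score * chain_height < keep * (1 << 64)

def kept_blocks(seed: int, keep: int, chain_height: int):
    """The linear computation noted above: scan every height."""
    return [b for b in range(chain_height)
            if keeps_block(seed, keep, b, chain_height)]
```

Note this yields a scattered subset rather than contiguous ranges; a range-based variant would trade some uniformity for cheaper lookups, which is the "do better than linear" question raised above.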
What are the incentives to not run a pruned node? Help the network. Offer bandwidth to people. If we had a block explorer interface in Qt, well that would involve an address index or other indexes that we don't want.
Backwards validation, it was a proposal by Tier https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012480.html
01:00 < maaku> kanzure jl2012: i believe that would require committing to the undo log in the block 01:00 < maaku> which is something we should probably consider
If I give you undo data, you can't tell if it's valid or not. I have to give you all the block data, and then you could tell if the undo data is reliable. Undo data requires data from much earlier blocks, which is removed from the UTXO set. I did a simulation of this proposal a while back. There were lots of transactions which had unresolved inputs that went way far back. So you end up with a lot of state that is potentially valid and you don't know. Maybe the dynamics have changed, or my definition of really large might have changed since then. At least what I had simulated is that you keep a whole transaction around until you have gone and found all of its inputs, and then you know if the transaction is valid, and you can propagate forward. The graph had enough fan-out so that if you spend a couple recent coins and a really old coin in a transaction, you basically got back to the early blockchain and it requires all that data. This might have been 2012 that I had goofed around with this, so my definition of "a whole lot" might have changed. There's a bunch of stuff we looked at in 2012 that I thought wasn't viable, and my definition of "improvement" might have changed... "well, that's only a 2x improvement". How much time have we spent on 1% improvements, exactly? Our definition has changed over time. The compact block stuff was discussed extensively in 2013; with 200 kb blocks it just wasn't that interesting.
01:03 < maaku> kanzure: sure. but the point is you can't validate subsets of a block without having the inputs
01:04 < maaku> you can either get that with full UTXO commitments, or backref commitments for that block, an optimized-for-this-usecase variant of which is undo block commitments
01:04 < instagibbs> more precisely you can't invalidate subsets without having all the data
The 2x wasn't a precise number on this type of proposal, the issue was that you end up having to keep most of the state around until you're back at the beginning. You'll have higher peak usage, as you resolve all the unresolved threads. 2x improvement on what you have to store can be meaningful if it means taking only 50% of your storage space rather than 100% of your storage space.
When you store to disk, the witness and the flag are required. You need some form of serialization. Sure you could use another form of serialization, but why not use it on the network? The requirements are the same. Are there multiple serializations for a transaction with no witness? No, there's only one. If you see the witness as non-optional data, I give you a transaction with these inputs and these outputs, and this is the witness, there's always one serialization. If the witness is empty, ... you're not allowed to use the extended format when there's no witness.
There's only ambiguity in transactions without inputs. A transaction without inputs will serialize fine, but if you deserialize it, it could be interpreted as one with or without witness. There are only two cases where this matters, which are the serialization of decoderawtransaction and fundrawtransaction, the only places where this would ever happen. And the resolution here is that it tries both. And it has to be the right length. Usually when we are deserializing, we don't check if there is garbage; perhaps that's something we should do. First we try without witness, and then with witness; with witness is always going to be longer than without. If it's something with witness, and you try deserializing without, you'll never read the whole thing even if it was a valid thing with witness. This is not a very rigorous argument, but I think the chance that there is something there that can be valid both with witness and without witness is negligible. Yes, please construct several cases like that. If you find one, let me know and tell me we need to start over. We could just tell folks not to make that transaction if we find that situation.
It does have an extra byte so that the serialization is compatible. Right now, there's the flag and then there's the dummy. The dummy is to disambiguate from there not being a flag. That's only relevant when you are serializing/deserializing, but when you are storing in a database... but you need to know how to read it. Sure, we could move that flag to CBlockIndex and then this wouldn't be necessary. Which we did do. No. Ah you're right about no.
Before, we had the serialization stuff in bip144. But this makes it necessary to include the serialization stuff, it makes it consensus-critical. Bip141 doesn't make that clear? It makes it clear now. But when we first wrote it, it wasn't-- the serialization wasn't a factor... No, it's always been critical. Max block size has always been defined in terms of ... well, it was ambiguous before, yeah. We do need to specify a serialization, no matter what.
It's only possible if the witness version has zero inputs, I think. The version? It interprets the count. No, wait, I'm wrong. Okay, yeah.
The txouts would have to contain the public keys. So when you spend it would reveal this. There's a disincentive to reveal keys, and so all the pruning goes away in this model. Yes, that's just as broken as tree chains. It's incentive-compatible, except for no incentive to reveal keys. I think this distinctly breaks overt soft-forking that take certain implicit forms, but you could explicitly support it.
We could do soft-forks on this, but it requires a soft-fork to change the version number on transactions. You can do soft-forks, if you're put into a position as a receiver where if you're relying on a soft-fork to secure your coins, it becomes much more dubious whether you're receiving money that is real.
The committed transactions thing can reveal the version number on the transaction, and then a soft-fork is required to change the version number. Soft-forks are too useful. You can't take a hard-fork for checklocktimeverify, so you need soft-forks for upgrading the network. Maybe you implement OP_MOXIE and you call it done. Then you freeze it. If you want to make a major overhaul, make a hard-fork. Yes, this requires perfect oversight and no bugs.
There should be an index of satisfied transactions and so on. A soft-fork means that if, as a receiver, you are relying on a transaction on the basis that it can't be spent in another way, that looks dubious to you, because it could be stolen. So basically the thing where you're already saying, to the extent that... miners can soft-fork committed transactions, we can't take that away from them. Once transaction history is verified by miners, we can't take away their ability to do soft-forks. You can make it so keys are never broadcast amongst each other. Miners can always go and treat something as invalid. Miners would have to block committed transactions. No, only certain types of them. Miners could say we won't go and commit the final bit that commits into consensus. The final commit is the revealing of the key; they can block that. I think you can function without miners revealing the key. For a soft-fork, that can be the final implementation to do; it would just block the key broadcast. How would miners block the key broadcast bit? It's a p2p network. That's basically where they are confirming old history.
You don't want to do that because that gives them a toehold to block things on. Prevent double-spending in encrypted form. Reveal of transaction data can happen arbitrarily later. You as receiver see that you have to verify backwards in history. Assume miners are willing to put into blocks these satisfactions of these expressions. Is that revealing the key? It's revealing the key plus a signature. Expression means the UTXO commits to an expression that needs to be satisfied to determine that its conditions are met. But the encrypted txout doesn't know what that is, right?
Imagine every txo is associated with a nonce, when executed on this nonce the first thing returned is considered to be the spend. You're simplifying the idea of executing the blockchain by committing to nonce-sorted dex things, you provide the nonce, and you keep going on? The nonce can be the txid. For all practical purposes, the txid looks like a nonce to anyone else. You commit to the txid. You're doing that in the sense that you execute a script that was derived from that. You want to find the first time it was executed, and validate it. What clauses would you put in there? Anything you want.
Miners can validate that the expression was validly executed. You can have an index of those expressions. As a verifier, you can check if it was double-spent. That was the first time, it corresponds to the data I was given, so we're good to go. You can reveal early and unlinkably, and cache that validation. You don't need to publish the transaction itself, just the valid spend. If you commit to the only possibly valid way to spend it, you're good to go. You're revealing the input id, output id, signature. No, you don't need to reveal that at that time. But later, yes. That's what you're committing to. How much of that is validatable before you do a reveal? You don't need to do a validate. It's okay if people stick expressions... It's okay if people satisfy expressions that don't correspond to real transactions. For the tx they care about, were the expressions satisfied? Later on, do full SPV.
Can you spam the blockchain up with these? Of course you can, and you can get away with it: here's a true expression, here's a true expression. It's inherently possible, because they don't validate. So definitely storage is inevitable. What's the right model to have these indexes expire in a reasonable way? This is like an STXO index. Spot reservation, knowing when it was created, only check that part of the chain, to the point where I think it was destroyed. This can be decoupled from the transactions themselves.
Every expression is an execution of a small state. To put stuff into history, you have to convince miners to go put in commitments to those transaction validation history expressions. You can do that quickly, later, you can leave them on chain, either way there's no implicitness to when they have to do this.
Blinded public key thing, in the UTXOs. Some kind of commitment to the output. No, probably not any cryptographic blinding, purely in terms of showing that the expression is satisfied, when and who satisfied it first. That's a generic double spending, and generic trusted computation system. You want to prevent double spends, and you also want to validate history, and both of these things look like each other.
You make a commit transaction, inputs are linked but the outputs are also publishing the hash of the public key. You could publish the full public key for the output, the input is a random number. You don't need to publish this, only when it's time to commit to history. Without a signature, it only has to commit to what the outputs will be in the future. That's sufficient. The recipient has to receive a larger and larger transaction-- no, you don't need to do that. When I send you a payment, I will out-of-band send you a bunch of data. If you don't want to do that, you could use an arbitrary message layer of encrypted packets. You would compact that so that you publish enough information to the actual chain so that miners can validate it. It's the sum of 1 through n, not factorial, where n is the number of transactions, so probably O(n^2).
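A quick way to see the O(n^2) figure: if the k-th recipient in a payment chain must receive all k prior transactions out-of-band, the total data moved over n hops is 1 + 2 + ... + n transactions. The helper name and average size are illustrative assumptions.

```python
def committed_history_bytes(n_hops, tx_bytes=300):
    """Total out-of-band data moved if the k-th recipient must receive
    all k prior committed transactions (tx_bytes is an assumed average
    size): 1 + 2 + ... + n = n*(n+1)/2 transactions, i.e. O(n^2)
    overall, not factorial, even though each transfer keeps growing."""
    return sum(k * tx_bytes for k in range(1, n_hops + 1))
```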
By broadcasting the encrypted values, you can give them the key and they can decrypt this. You can do this separately out-of-band. Separate consensus enforcement messaging network. Your scriptsig can refer back to the previous block, decrypt with this key, and run it through the validation function. It's only an argument about it being .. with the way the system works. So you have to make sure the scripts can refer to things in the chain in easy to find ways. We can have this decryption key grab data out of your blockchain cache.
Lightning could be used as an encrypted message bus. Stealth address stuff was originally because Amir wanted to do this over bitmessage, and it's unreliable and you would lose payments. So if you don't want to lose payments, then use consensus-enforced bit message where we know we have data on the other end.
With expression validation stuff, you can do sharding with that scheme. Treechains are possible with fraud proofs if they work. It makes incentive to commit to invalid parts of tree, much lower, because there's much less chance that.... if a lower level takes the fees from a transaction, a higher level can override it and take the fees as well. Higher levels cannot override, because you commit to at what level your transaction is valid. Your txout commits to what level it can be used on. If you make it reorgable, it's hard to pay miners to protect it, because it could be reorged later.
If you say that broadcasting encryption keys is a best-effort, optional thing at most, nothing to do with miners, then what's a valid transaction as a concept is up to the user. Miners have no say anymore. In petertodd's model, that's not true. Miners could do a soft-fork by making it not possible to go and satisfy expressions that prove things weren't double spent. Soft-forks work fine in petertodd's model.
Miners could do voluntary soft-forks to support multiple versions. Full-node user-defined soft-fork: basically, transactions remain encrypted perpetually, there's some kind of backwards-compatible protocol change, and users do the validation. But then users have to validate indefinitely back in history, but then you reveal keys and it compacts things at best-effort. How do users compact things without mining? As you go back in history, how do you determine what was valid? Compacting means throwing away data and assuming things were correct. You could give keys to the SPV / lite client. So you have to download headers and the key stream. But that's not compacting. The only way to do compaction is trusting that someone else validated it. How can you throw away history? Probably O(n) linearization.
If there was a fake transaction in history, how would you detect that after compaction? That tells you the properties of compactions. Mining is a compaction where you trust people with mining power that they validated things. Checkpoints are similar, you have to trust someone there. SPV assurance that a transaction hasn't been double spent, but you don't know if the thing is valid. So validity is still a problem. You basically need miners attesting to that.
You could have a scheme that can tolerate like 25% biased miners. Semi-honest, not actively orphaning anybody. It could work. Bootstrapping problems maybe. Reveal the keys in a timelock encrypted form. You have to say previous states are valid. Make an assertion that it's valid/invalid, it's timelock encrypted, other miners must mine on top of that, before they know whether that was something they would have wanted to mine on.
If a small % of miners validly commit to transaction validation indexes, you can detect double spends by sampling until you hit one of those miners. You will probabilistically hit one of them. Some timelock encryption scheme might make this system better. In petertodd's scheme, it's valid to create a block with an invalid transaction index, it's okay to build on top of it; we're just assuming 30% of miners are honest. Once you make an assumption like that, it becomes trivial. If you have conflicting information, you're trying to show that a transaction isn't spent. If someone claims it isn't spent, they can't show it. What about a transaction never existing? You have to show previous history, this is an O(n) compactification type scheme. If you had committed UTXO sets you could do a fraud proof that it doesn't exist, but that's different... proof-of-publication with an index of publications; the property is that you can imagine only a small percentage of miners being honest but it still works because you do sampling. What do you do if 90% of miners are lying? What kind of lying? Saying a transaction is valid when it's not. If the lie is that this data has not been published, then even with only 1% honest, you can sample them and you will find the spend, preventing someone from double spending. If they incorrectly say it was spent, they can't do that, because they don't have the ability to go and provide the key. I want to know there was not a double-spend: I sample all the miners, and if I ever hit an honest miner that honestly includes the spend, then I will reject that transaction because it was double spent. I only need a subset of miners to be honest. All you need to know is whether it was published before. It doesn't matter if they all agree/disagree. If 9/10 lie and say it was never spent, and one tells you the truth, that's okay. That's like the commit transaction double-spend thing. This is the case of defending against someone trying to double-spend you.
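The sampling argument can be made concrete: if a fraction p of miners honestly index spends, the chance that k independent samples all miss them is (1-p)^k. A minimal sketch:

```python
def detection_prob(honest_fraction, samples):
    """Chance that sampling `samples` miners independently hits at
    least one honest miner (each honest with probability
    honest_fraction), which is all it takes to learn that an output
    was already spent."""
    return 1 - (1 - honest_fraction) ** samples
```

Even with only 1% of miners honestly indexing spends, around 459 samples give better than 99% odds of hitting one, which is why the scheme tolerates a large dishonest majority for this particular lie.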
The honest miner will tell you the output was already spent.
Disable miner-enforceable soft-forks, then have full-node-only soft-forks. This reduces miners to transaction ordering. Currently they are doing validation assurances for SPV clients.
Extension blocks don't have the properties of a .... extension blocks are a soft-fork as much as switching to an altcoin is a soft-fork. It's censoring transactions. Both blocks are valid. If they are not enforcing transactions, then it's just some extra stuff that the nodes don't care about. It creates centralization pressures, and you're not validating the user's data, and you're slightly SPV. You would be getting OP_TRUE transactions from the extension block stuff. Wallets should probably have better handling of transactions with OP_TRUE in their recent history. You would treat these as riskier, more likely to get reorged out.
Encrypted transactions would take away the information that miners sometimes need for soft-forking. It might be valid for an encrypted transaction to be valid between users, but not to the miner who includes it in a block. Perhaps miners will demand keys or else not mine. Miners could blackmail users by censoring transactions until keys are revealed. Could work with timelock. The encryption scheme is structured so that miners can prevent double spending without having to link transactions. As an SPV client, I can get assurance it's not double spent. Then as an SPV client, I get the encryption keys and check validity.
What about only proving authorization of the spend, and no other validity issues? Can we restrict the list of things that you want to validate? Checking that the coin exists, that it's not double spent, we can prove it's not double spent without linking. Proving that the coin exists. Assertions about validity to SPV clients.
If you want a hard-fork, then you should put in committed transactions. Counteracts centralization side effects.
Amounts being correct is another validity issue. Or you could standardize the amounts. Equal number of input and output coins. You could do confidential transactions: without revealing the values, prove that the input values, minus a fee, add up to the output values, and it's still not linked (presumably). You could combine CT and committed transactions. You still haven't validated old history. You could prove that the block balances. But that's SNARK territory. It's chock-full of CT. It's a big block because of CT. How do you verify that this was true other than verifying the whole block? How do you compact this? Would CT be aggregatable (ask Greg)?
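The balance check described here can be sketched with toy Pedersen-style commitments in a multiplicative group mod a prime. All parameters are illustrative assumptions: real CT uses secp256k1 points plus range proofs, and the discrete-log relation between the two generators must be unknown.

```python
# Toy parameters: NOT secure, purely to show the homomorphic balance
# check.  commit(v, r) = G^v * H^r mod P.
P = 2**255 - 19
G, H = 4, 9

def commit(value, blind):
    return (pow(G, value, P) * pow(H, blind, P)) % P

def balances(in_commits, out_commits, fee, blind_excess):
    # product(inputs) / product(outputs) should commit to the fee,
    # with the leftover blinding factor supplied by the prover.
    acc = 1
    for c in in_commits:
        acc = acc * c % P
    for c in out_commits:
        acc = acc * pow(c, P - 2, P) % P   # Fermat inverse
    return acc == commit(fee, blind_excess)
```

The verifier never sees the values; homomorphic addition of commitments is what lets "the block balances" be checked without revealing amounts.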
What if you separate the full-node part, about double spending and so on, to the assertions to SPV nodes? Completely separate structure. Disconnected trust me SPV this is true, that's not like tied into.... If miners didn't validate, it wouldn't be unreasonable for people to pop up and say yeah trust me. That's kind of what people are doing, until they get confirmation. It's not that crazy. When we talk about.. if they can validate recent history, you only have to do this periodically, there's less censorship risk, just verifying true facts.
The ratio of economic full nodes. It's probably the case that transactions are not transmitting between SPV nodes only... to the extent that things are not guaranteed anyway before confirmation, perhaps just say nothing is guaranteed until it lands on a full node. Have a hybrid SPV node that connects to a full-node if you really care.
Instead of SPV clients accepting miner assertions, why don't they accept miner double spend assertions and check with a full-node that they are true, rather than in aggregate? It's a small relaxation. That would solve all problems. This is going in the other direction, making SPV even lighter. This is what petertodd was doing with his smart wallet, a colored coins implementation where you could go get attestations that certain states in history were true. It committed to a merkle tree of all the different proofs that were true; once you reach that, if you trust the person making the attestation, you're good to go. Then you use miners to validate recent history for double spends. It gives you flexibility. You haven't given up much. To go back a couple hundred steps in history, it's a couple kilobytes per step. In this wallet, miners did not understand the coloring; you're removing some of the validation that miners are currently doing, so you end up in a similar model. Wallets are responsible for sending around history.
Bundle committed transactions with a hard-fork, and this different reduced SPV model. Luke disagrees that they implement SPV because they don't even do fraud proofs.
Say you have 5 channels. You want to be pretty clear that they are actually yours. So you can build network graphs. If you just have a bunch of edges, you can't link them all together into a vertex; it doesn't help. But if you say this edge and this edge are all connected, even if I'm not using the same public key for these 2-of-2 multisigs, I must have had a key in common to generate all of them.
What about a proof that you know the difference between the discrete logs of two keys? Here are two keys; there's a private key for both; there's a scalar difference between them, and you can prove to me that you know the difference between them, which implies you know both the private keys. To make that proof, all you do is take the keys, negate one and add them together, then sign for the resulting key. My identity A, my channel C, some point in the middle B: B is equal to big A minus big C, or equivalently G times (a minus c) mod the group order. You prove you know little b by signing. And then you prove this equality; the receiver just checks the point equation. So whoever knows... The problem before was that given little c and publicly available information you could get all the little c's. With this, I don't reveal it, I just show knowledge of it. This can be used to show that one pubkey was derived, bip32-style, from another pubkey, without getting into the specifics of bip32.
So the attacker wants to prove something, so they say: I want to show that this channel key is related to someone else's identity, like A prime. So they can compute a difference that they can sign for, but then they can't sign for the channel anymore. They can compute A prime minus the same C, which is B prime. They won't know little a. They won't know any of the little guys, and if they can then they can derive the private keys. If you know the difference and you know one, you can get the other private key. This is just 64-byte signatures. It's up to you how you generate your keys. You have to have your keys together to compute the difference. If you are doing Schnorr 2-of-2 you can sign them separately, you don't actually have to have them together, but yes they need to communicate. Tadge is trying to get a proof that two keys are related for Lightning, where you want a lightning identity key and a channel key, and you want to prove they are owned by the same party, so you do a proof of knowledge of the difference.
It doesn't matter what you sign, maybe the resulting key; the thing is that if you're using ECDH, you need to sign the public key you're signing for to make it a proof-of-knowledge. You sign off on B with little b. This proves that A was involved in the creation of C. It doesn't prove the involvement. You can generate them independently. You can do cool things like have some identities on an HSM, but you still have access to both the private keys.
It doesn't prove the same entity has both keys, but that's actually beneficial. You want it so that no external entity can claim a channel. You want the owner of both keys to be involved in creating the proof. What kind of cooperation do you need? Sharing the private key. That's a lot of cooperation. If you can just sign off and random attackers can pretend your channel is mine, my channel is yours, well that you can't do without providing private keys, and you can't provide any stronger protection than that anyway.
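The proof-of-knowledge-of-difference can be sketched in a multiplicative group standing in for the secp256k1 EC group (so "negate one key and add them" becomes division). Everything here is a toy: the modulus, generator, and hash construction are illustrative assumptions, not a production scheme.

```python
import hashlib, secrets

P = 2**255 - 19   # toy prime modulus, stand-in for the EC group
Q = P - 1         # exponents work mod P-1 (Fermat's little theorem)
G = 2

def H(*parts):
    h = hashlib.sha256()
    for x in parts:
        h.update(x.to_bytes(32, 'big'))
    return int.from_bytes(h.digest(), 'big') % Q

def schnorr_sign(priv, pub, msg):
    k = secrets.randbelow(Q)
    R = pow(G, k, P)
    return R, (k + H(R, pub, msg) * priv) % Q

def schnorr_verify(pub, msg, sig):
    R, s = sig
    return pow(G, s, P) == R * pow(pub, H(R, pub, msg), P) % P

# identity key pair (a, A) and channel key pair (c, C)
a = secrets.randbelow(Q); A = pow(G, a, P)
c = secrets.randbelow(Q); C = pow(G, c, P)

# "A minus C" in EC terms is division here: B = A / C = G^(a-c)
b = (a - c) % Q
B = A * pow(C, P - 2, P) % P
assert B == pow(G, b, P)

# signing under B proves knowledge of the difference a - c, and hence
# joint control of both keys, without revealing a, c, or b itself
proof = schnorr_sign(b, B, 12345)
```

The verifier computes B from the public A and C alone, so a valid signature under B is exactly the 64-byte proof the text describes.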
BLS can be very interesting and compelling at the same time. The most compelling argument against it is the weakness of the pairing assumption. Signature aggregation and near-term extensions, for version script. Things like key trees and poly trees. Things are complicated by the fact that "signature aggregation" means different things.
The simplest is essentially native multisig, which is the simplest form of signature aggregation, multiple people want to sign and the result is one signature rather than multiple signatures. The second form is cross transaction aggregation, the idea that a transaction would have in the basic case just a single signature left even if there are multiple inputs and all the inputs signatures are aggregated into a single one. The third is cross block signature aggregation, and this requires BLS.
BLS is Boneh-Lynn-Shacham, a signature scheme that relies on different assumptions than plain ECC. This would allow us to go to the point where a block just has one signature left. The fourth type is OWAS, where you get to the point where not only does a block have just a single signature left, but you also can't see which input and which output are correlated. The way Greg described this is that your transactions would just be inputs or just be outputs.
I think there's another axis, which is whether they are interactive or non-interactive. In OWAS, you can create transactions that just spend existing coins, or just create outputs. These are two separate transactions, and you can use sig aggregation to add them together, after which they can't be taken apart; then you can aggregate more pairs together and you can't see the correspondence between input sets and output sets. You have a bag of input and output transactions and you can't tell which one is from which.
This will lead to the point where you could say the p2p network is constructing the block. We were talking about signature aggregation. Yesterday someone mentioned block-level aggregation, yeah you can see this as p2p network trying to construct the block and miners just pick the block. Aggregating transactions is now the same thing as constructing a block aside from PoW. No more distinction between transactions and blocks. The data structure would become the same, a bunch of inputs and a bunch of outputs.
You would want to remember the smallest possible parts, so that you can track low-fee-paying parts. There are various complications, like partially overlapping relay: if you give me transactions 1 and 2 aggregated into one signature, and transactions 2 and 3 aggregated into another, I can't put these together anymore; you can subtract one out, but only if you have seen it separately. What's the time complexity? Well, let's talk about the advantages first. I feel like you just answered my question.
The other interesting advantage of this is that it might tie into long-term incentives in the network. If someone aggregates transactions and gives them to you, you can't disaggregate them, so this creates an anti-censorship property. If you stick your transaction with WikiLeaks', the miners can't censor your transaction without also censoring theirs, or rather theirs without censoring yours. Censorship is possible but only if you censor everything. And then you create incentives to aggregate with certain folks. There might be some disincentives to aggregation. If someone has aggregated you with WikiLeaks, and you're not getting mined, you can give your transaction to the miner and they might mine it anyway on its own. If miners take their fees by adding an output that takes the excess balance thrown into the pot, it means another miner can't reorg that out and take all the transaction fees. This is super interesting because miners can't grind this out... I posted this on bitcointalk about 2 years ago. If these are transactions they only learn about from the block, the miners can't separate them out, so that's the positive stuff.
There are some space advantages to all of these schemes. All of them have these. 4 (OWAS) less than 3 (cross-block). One-way aggregate signatures (OWAS): I can aggregate them but not separate them. BLS is Boneh-Lynn-Shacham; OWAS is also BLS-based.
Other than space advantages, none of them directly improve efficiency. The efficiency improvements--- multisig improves efficiency, so the first one does, the second one kind of does (cross transaction). The efficiency improvement of 2 could be done separately by adding batch verification.
Another improvement is privacy; each improves it in different ways. Multisig Schnorr signatures improve privacy by hiding your policy; they hide that you're using multisig. The example is that tomorrow BitAwesome releases their new wallet service which uses 3-of-7 multisig, and they are the only ones that do 3-of-7 on the network, so now you can automatically identify who they are; this is the sort of problem that Ethereum takes to the extreme... You can do more things that look the same. 2-of-2 looks like 1-of-1, and we can make 3-of-7 look like it too.
Cross-txn and cross-block aggregation improve privacy by making coinjoins economically advantageous. You save money by making coinjoins. This is an indirect improvement. Specifically, it's different from the space advantage in that you get more savings by joining with other people. And finally, OWAS improves privacy to an extreme, because what it's achieving is a non-interactive coinjoin.
We can also list mining incentives and other incentives.
We should also list downsides. Software complexity. Less so for multisig, it's mostly client software complexity not consensus software complexity, there's basically no consensus rules for achieving basic multisig. Tree signatures fall under 1. For 2, I want to go into a concrete proposal (cross txn).
Loss of accountability is an issue for basic multisig. If you make multisig look like single sig, you can't tell who has signed. This is a privacy/accountability thing, and there's some hybrid schemes but you lose some of the space advantages. If you know all the pubkeys, it's still computationally indistinguishable. Tree sigs preserve accountability and this is included in the first option listed, basic multisig.
New cryptographic assumptions for 3 and 4, i.e. cross-block (BLS) and OWAS. These are both based on BLS, which is pairing crypto, a generalization of elliptic curve cryptography that academics love and have been thinking about for decades, but which has never seen any production use with real value.
For advantages, for 3 and 4, we need to list convenience. All of this requires interaction steps between signers, and 3 and 4 actually reduce them. OWAS is an improvement over the status quo because now every node in the network can.... multisig is interactive between clients... you can do one-pass. By multisig in option 1, this is Schnorr multisig, and tree sigs. When you do signing, you have to make 2 trips through the devices. First you have to get nonces, then you sign. First you commit to making a signature, bring the information together, distribute it to everyone, make a partial sig, bring the partial sigs together, then make a signature. But now everyone just signs. I can draft a transaction, sign it, hand it to someone, and you can sign it and broadcast. This is a problem if you want to do things like NFC communication for signing because now it's tap, tap again, tap again. It's only twice. First to get the payment request, send out partial commitment, you can give a nonce for free if you have a good random number generator so it could be two-steps for NFC.
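The two-trip flow just described (gather nonce points, then gather partial signatures) can be sketched in a toy multiplicative group standing in for secp256k1. This illustrates only the round structure: there is no rogue-key protection or nonce-commitment round, so it is not a secure protocol, and all parameters are assumptions.

```python
import hashlib, secrets

P = 2**255 - 19   # toy prime modulus (sketch only)
Q = P - 1
G = 2

def H(R, X, msg):
    h = hashlib.sha256()
    for v in (R, X, msg):
        h.update(v.to_bytes(32, 'big'))
    return int.from_bytes(h.digest(), 'big') % Q

# Three signers with an aggregate key X = product of pubkeys.
privs = [secrets.randbelow(Q) for _ in range(3)]
X = 1
for x in privs:
    X = X * pow(G, x, P) % P

# Trip 1: every signer picks a nonce and shares its public point.
ks = [secrets.randbelow(Q) for _ in privs]
R = 1
for k in ks:
    R = R * pow(G, k, P) % P            # combined nonce

# Trip 2: with R known, each signer produces a partial signature,
# and the partials simply sum.
msg = 42
e = H(R, X, msg)
s = sum((k + e * x) % Q for k, x in zip(ks, privs)) % Q
```

The sum verifies as one ordinary Schnorr-style signature under X, which is why 3-of-3 ends up indistinguishable from 1-of-1, at the cost of the extra nonce round.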
Another disadvantage is computational performance. The pairing operation which is required for each verification in these schemes, every signature you verify needs a pairing operation, costs about 500 microseconds each; this is on the order of 10x slower than our current signature verification. There's still various verifications being performed, even if there's one signature for the block.... in 1, you don't have this: if you make 3-of-3 look like 1-of-1 there's only a single verification. In the aggregate case it's O(n); in the pairing case that O(n) is 500 microseconds per message. It's parallelizable, and it's cacheable. OWAS is one signature but the computation is still O(n), in the number of pubkeys. The combination step itself is very fast. 500 microseconds per pubkey is an order of magnitude slower than what we have now. This is an extreme computation cost. The state of the pairing stuff right now is well-optimized. Boneh thinks it could be optimized even further.
People who have implemented BLS stuff have on-the-fly assembly generation: it figures out your cache size, writes assembly code and compiles it. I guess this was the only way to get performance that is acceptable, down to 500 microseconds. It's a pretty big slug. It's cacheable. As you see parts of an aggregate come in, when a block comes in that uses all of them, you can combine the cached data and do extra work for the missing stuff. The data you have to cache is a 3 kilobit number per signature. Your cache would be enormous, to have a cache entry for every signature in the mempool. Your mempool cache is going to be much bigger, unless you have a 100% hit-rate. Your eviction policy for the cache could be weighted by fee or something. This is not a non-starter, but it's a challenge. In Core, we currently have hit rates at like 90%, the cache is pretty big already and we are space limited and we only have 32 bytes per entry. Increasing that to 3 kilobits, that's a little blah....
The operation to combine your cached entries takes some time, but it's maybe at the speed of sha256 or something, it's Fp addition. And you have to keep up with the network. If you were looking at using this stuff in the network today on current hardware, like on an ARM device, the answer would be no, it would have trouble keeping up. On a desktop, sure, it might keep up. There's some academics working on a paper about BLS and bitcoin-like systems. They basically talk about what the ratio of CPU time to bandwidth would have to be for this to be a win. If the ratios are right, then BLS could be a win.
Could this be accelerated with FPGA or hardware support? Probably an ASIC, dunno about an FPGA necessarily. But yeah, an ASIC. And keep in mind that cross block BLS and OWAS rely on new crypto assumptions. I don't think we have a coherent way of thinking about introducing new crypto assumptions into the public network. People could choose whether to use this stuff... So, zerocash will be heavily dependent on these assumptions and many more. This could be deployed as a sidechain and whatever else.
This is opt-in at signature time... you're pre-committing with your pubkey. It's a different pubkey here. You need to use a different group, a pairing-compatible group, with your pubkey. You are stuck in BLS land. I don't know why we lost Pieter. BLS pubkeys are the same size as EC pubkeys, for equivalent believed security. BLS sigs are about half the size of Schnorr sigs, they are 32 bytes.
Cross block BLS and OWAS are the same crypto system; the difference is how you make use of it in the network. We would not implement cross block BLS, we would want to do OWAS. OWAS is where you create transactions with no outputs or transactions with no inputs. You would be aggregating them. It's a transaction format thing. One difference though is that OWAS would require stunts to do that without a hard-fork.
Implementing OWAS... well, implementing the consensus rules would be easy. You care about both inputs and outputs. You would need packaged relay. The important thing for OWAS, you could implement that without relay logic implemented. It would work just like cross block BLS. There's a way to roll it out in a straightforward way. We would probably not do cross block BLS, we would do OWAS instead. It's a transaction format question. More complex to do a soft-forkable version. To do OWAS as a soft-fork might be quite complicated. As a hard-fork it's straightforward. It's on the level of CT as a soft-fork. It's not impossible. There's probably ways to do it. CT as a soft-fork would probably be harder.
So that's the BLS stuff. I didn't talk about Schnorr stuff. With OWAS, you move the risk of losing funds to ..d.. we can't ever do things in one step except to force every wallet to send to a new address. Given a 1 megabyte size, you could have 10,000+ signature verifications that you have to do for that block. At 500 microseconds each. So that's seconds. That's why I said this is a problem on like low-end hardware like raspberry pi, they couldn't keep up with this, and on a fast desktop sure but we have to solve relay, weak blocks and caching really well. It's on the verge of viability. Maybe one more cycle of Moore's law if it holds out that long.
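The arithmetic behind "that's seconds", using the figures from the discussion (10,000+ verifications in a 1 MB block, ~500 microseconds per pairing):

```python
# Rough cost check for pairing-based verification of one full block.
# Figures come from the discussion above; they are estimates, not benchmarks.
sigs_per_block = 10_000          # signature verifications in a 1 MB block
pairing_cost_us = 500            # microseconds per pairing operation

block_verify_us = sigs_per_block * pairing_cost_us
assert block_verify_us == 5_000_000   # i.e. 5 seconds of pure pairing time
```

Five seconds of CPU per block is what makes low-end hardware like a raspberry pi unable to keep up without very good relay, weak blocks, and caching.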
OWAS + CT that will be too much, pushing it too much. CT is much faster. OWAS + CT would be... well it would be zerocash but with much better scalability. Excited about the future of BLS and OWAS stuff. Someone in Boneh's group is looking into making this viable for Bitcoin-like systems. Alex L maybe.
Concrete proposal for Cross Txn, every transaction ends up with a single signature even if it has 100 inputs, even if it's coinjoin especially if it's coinjoin. This is a specific advantage. Even with a coinjoin, it would have a single signature which means you get a fee advantage from doing it with coinjoin. It's interactive. Coinjoin is also interactive. You have to do BLS stuff to get non-interactive. It's one extra round over coinjoin.
OP_CHECKSIGSCHNORR and you're done with basic multisig on that list? The thing we can leverage from Schnorr. I'll go specifically into that, and then how you apply it in 1 and 2. You can jointly construct a single signature: multiple people sign the same thing, add the sigs together, and the result is now a signature that is valid for the sum of their pubkeys. If I want to send a 2-of-2 multisig to you two, you both give a public key, we add the pubkeys together, and now everyone who wants to send money to them sends to the sum of the pubkeys. The only way to sign is if they both sign and add their signatures together, since nobody knows the combined private key. And the sender has no idea that they are using multisig. It's naive multisig, just add signatures together.
They need to jointly... there's an extra round because they have to jointly construct a nonce. Nobody actually knows the nonce you're signing with. You sum your nonces and that's the nonce for the transaction. Knowing your nonce, you form your signature and sum the signatures. If you knew the nonce used for the signature, you could determine the private key, but neither of you knows the nonce... in 2-of-2, why not subtract your nonce? The resulting ... these aren't nonces, they are keys. You need the sum of the pubkeys. In this case, the simplest case, like 2-of-2 or 3-of-3, it looks like a single sig and a single pubkey on the network. And the CPU cost is 1 verification. It's basically single verification. The network cannot tell; it is indistinguishable from a single pubkey.
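The two-round flow just described can be sketched end to end. This is a minimal sketch only: plain modular arithmetic stands in for elliptic-curve point operations, so it is completely insecure and illustrates just the algebra; all helper names and constants are hypothetical.

```python
import hashlib

Q = 2**61 - 1   # toy prime "group order" (stand-in for the curve order)
G = 7           # toy "generator"; point(x) plays the role of x*G on a curve

def point(x):
    # Linear map standing in for EC scalar multiplication.
    return (x * G) % Q

def h(*parts):
    # Hash arbitrary values to a scalar (the Schnorr challenge).
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def verify(P, m, R, s):
    # Schnorr check: s*G == R + H(R, P, m)*P
    return point(s) == (R + h(R, P, m) * P) % Q

# Two signers with private keys a and b; the network only ever sees A+B.
a, b = 12345, 67890
A, B = point(a), point(b)
P = (A + B) % Q                      # combined pubkey, looks like one key

# Round 1: exchange nonces and sum them.
ka, kb = 1111, 2222                  # in reality: fresh random nonces
R = (point(ka) + point(kb)) % Q      # combined nonce; neither knows its log

# Round 2: each makes a partial signature over the same challenge, then sum.
m = "pay 1 BTC to carol"
e = h(R, P, m)
s = ((ka + e * a) + (kb + e * b)) % Q

assert verify(P, m, R, s)            # one signature, one verification
```

Because the partial signatures are linear in keys and nonces, the summed signature verifies against the summed pubkey exactly like a single-signer signature would.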
Threshold sigs for ECDSA achieve the same end using a really complicated procedure in the signing which has like 6 communication rounds and requires poly(n) communication, and it's really computationally expensive and requires probably 20,000 extra lines of code and a completely different crypto system.... But on the network it looks like ECDSA; people could be using that today and we wouldn't know, and as far as we know nobody is doing that. I wanted to do that in the past but it was a lot of code to write.
Say we wanted to do 3-of-3 Schnorr multisig, to get the efficiency, privacy, and size advantages of this. There's a problem. Say we do 2-of-2, both of you: you say my key is A, this is your actual key, and your actual key is B, but what you announce is B minus A. Now we add them together, and everyone who thinks they are paying to the two of you sums A and B-A, and unknowingly we just end up with B, which is Mark's key, and now Mark can sign for the both of them, and the other person cannot, he doesn't even know he can't. Effectively both of you together are unable to sign, except by just Mark doing it... This is bad. This is annoying. This is also a problem you would have ended up with even if you had read the academic crypto papers on this, none of which mention this problem. And usually it's not mentioned because there are external procedures that assert which key belongs to who. In theory, this is the case because you're still getting this address from somewhere, and it's something you trust, and it's not unreasonable to expect that you trust it to do due diligence that both of you have your key, which is easy: you sign with your own key, and you're unable to do so if you have pulled this trick. So one way to address this is to require an interaction, pushing the complexity to the assumptions under which the scheme is valid. In bitcoin we don't like that; everyone implements their own crypto and they will get this wrong.
There is a solution that I came up with a couple months ago, which is you multiply everyone's key by the hash of their own public key. Before signing, you multiply your private key with the hash of your public key; to add our public keys together, we do this preprocessing step which doesn't otherwise influence the Schnorr procedure, but we can prove that if you have only seen his key you cannot come up with any key ... A*H(A): Mark tries to come up with an X*H(X) such that X is something for which he knows the private key, and this is impossible. If he can do this, he can break Schnorr itself. There's a black box reduction from this to making forged signatures. If you have only seen A, and if you can pick an X and a k such that this equation holds, then you can use this mechanism to break Schnorr, which we figure is highly improbable, and computationally infeasible, otherwise we wouldn't be doing this. This is sufficient for the multisig case. It's not a full solution.
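Both the cancellation attack and the H(A)*A fix can be demonstrated with the same toy arithmetic (modular integers in place of EC points, so insecure and illustrative only; helper names are hypothetical):

```python
import hashlib

Q = 2**61 - 1   # toy prime "group order"
G = 7           # toy "generator"

def point(x):
    return (x * G) % Q

def h(*parts):
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

# The attack: Alice announces her real key A; Mark announces B - A.
a, b = 12345, 67890
A = point(a)
B_announced = (point(b) - A) % Q     # Mark's rogue key
naive_sum = (A + B_announced) % Q
assert naive_sum == point(b)         # the "2-of-2" key is just Mark's key b

# The fix: everyone's key is weighted by the hash of that key, so the
# network effectively pays to H(A)*A + H(B')*B' instead of A + B'.
delin_sum = (h(A) * A + h(B_announced) * B_announced) % Q
# Mark no longer controls the sum: to do so he would need an X with a known
# private key such that H(X)*X hits a chosen value, which (per the claimed
# reduction) would let him forge Schnorr signatures outright.
assert delin_sum != point(b)
```

The first assertion shows the naive sum collapsing to Mark's own key; the second shows the delinearized sum no longer doing so.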
This does not require special interaction; the keys could be had from a database. This applies to multisig. We could do this. We can create an opcode like OP_CHECKMULTISIGSCHNORR, which takes as input different public keys that are pre-multiplied. The script logic, to do multisig, doesn't need to know about this blinding scheme; we just do it before it ends up on the blockchain. It's just OP_CHECKSIGSCHNORR or something. It could take 1, or it could be an OP_CHECKMULTISIGSCHNORR which takes a number of public keys, then a bitmask of which ones signed, and then you have a k-of-m constraint on it. It takes the public keys, adds them together, then does Schnorr verification, which is really fast, which is essentially zero overhead. The point addition is basically free, it's 60x faster than signature verification. About 2 orders of magnitude.
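A sketch of what such an opcode might check, under the same toy stand-in for EC math; the delinearization preprocessing is omitted (keys are assumed pre-multiplied before they reach the script), and all names are hypothetical:

```python
import hashlib

Q = 2**61 - 1
G = 7

def point(x):
    return (x * G) % Q

def h(*parts):
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def checkmultisig_schnorr(pubkeys, bitmask, k, R, s, m):
    """k-of-m check: sum the pubkeys the bitmask selects, then do one
    ordinary Schnorr verification against that sum."""
    chosen = [P for i, P in enumerate(pubkeys) if bitmask >> i & 1]
    if len(chosen) < k:
        return False
    Psum = sum(chosen) % Q           # point additions: essentially free
    return point(s) == (R + h(R, Psum, m) * Psum) % Q

# 2-of-3: signers 0 and 2 cooperate (bitmask 0b101).
keys = [111, 222, 333]
pubs = [point(x) for x in keys]
ka, kc = 44, 55
R = (point(ka) + point(kc)) % Q
Psum = (pubs[0] + pubs[2]) % Q
e = h(R, Psum, "spend")
s = ((ka + e * keys[0]) + (kc + e * keys[2])) % Q

assert checkmultisig_schnorr(pubs, 0b101, 2, R, s, "spend")
assert not checkmultisig_schnorr(pubs, 0b011, 2, R, s, "spend")
```

The verifier's extra work over a single checksig is just the additions of the selected pubkeys, which is why this stays close to single-verification cost.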
Wasn't there a better scheme than bitmask for more complex things? That's the key tree signature stuff. It has partial overlap. In some cases, one is better. We have some other schemes that build on top of this, like getting accountability, or polysig which allows signing with k-of-n where k is very close to n. But let's not talk about that. For really large like 999-of-1000 and you do it in size O(1)... the signature is in the size of the... anyway that's out of scope here.
We implement checksigschnorr and multisigschnorr, and everything up to here is, as far as we know, solved. And it's not just prospective here: we have a Schnorr signature implementation in libsecp256k1, it's used in Elements Alpha. The specific formulation we're using we would like to change, since we got smarter over time, but most of the work for that is basically done. We would write a paper about it before implementing it.
Now I want to talk about signature aggregation across different inputs. Here, another problem arises. This proof that we have... well, first state the problem. What are we doing? What is aggregation? We have a UTXO set in the network today, there's a bunch of coins. Alice has 10 different txouts, she would like to spend all of them at once in a single transaction and would like to do so in the simplest way possible. Even assume the 2 inputs belong to the same person, so we don't have the complexity of interaction. We have public key p1 and public key p2, those are Alice's txouts, and you could write a transaction that spends those outputs. There's a single transaction with two inputs and two outputs: input1, input2, output1, output2. Only one of these would contain a signature. The way we achieve this is by applying the same technique as multisig. You gather the txouts from the blockchain, they provide their public keys, so the transaction has access to the pubkeys; you would ask the network to sum the public keys, then you would provide a signature for the sum of the public keys. It's like multisig. Then you do some layer violation where, the way we've been thinking about it, each of the inputs would do some setup where they say you're going to sign with that pubkey. It would be a checksig with a zero length signature in all of those. It does not interact with different sighash types; it's basically SIGHASH_ALL in those cases.
How I would think you do it is with transaction validation, not just input validation, because we're clearly doing something at the transaction level that we were not doing before. For transaction validation, you would have an accumulator, a stack of verifications to perform. Every time you see a CHECKSIG, you would say yep, it succeeded (even in normal script verification it succeeds automatically), but as a side effect it would push the (message, pubkey) pair that it would be validating onto the stack. At the end, after validation, you look at the stack: it would have one signature and a bunch of (message, pubkey) pairs. You add the pubkeys together, combine all the messages by hashing them into a single message, and verify that the sum of these public keys combined with that signature is a valid sig for this hashed message. It could be an OP_CHECKSIGVERIFY, or we could have an OP_CHECKSIG where there's an assertion as to whether the checksig is supposed to pass or fail; you would pass a bool for the expectation.... You have n checksigs? If you have n checksigs that are executed, we assume they are valid. You have the success ones and the fail ones. The fail ones you throw out. Instead of a signature there, you provide a 0 or something; it would be ignored and execution flow would continue.
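A sketch of the accumulator idea, again with toy modular arithmetic in place of EC operations, and deliberately ignoring the cross-input delinearization problem that comes up later in the discussion; all names are hypothetical:

```python
import hashlib

Q = 2**61 - 1
G = 7

def point(x):
    return (x * G) % Q

def h(*parts):
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

# Accumulator built up as each input's script runs.
acc = []

def op_checksig(pubkey, msg):
    """Succeeds immediately during script evaluation; the real check is
    deferred by pushing the (pubkey, message) pair onto the accumulator."""
    acc.append((pubkey, msg))
    return True

def finish(R, s):
    """After all inputs: hash the messages into one, sum the pubkeys,
    and verify the single transaction-wide signature."""
    Psum = sum(P for P, _ in acc) % Q
    M = h(*[m for _, m in acc])
    return point(s) == (R + h(R, Psum, M) * Psum) % Q

# Alice spends two of her own txouts with one signature.
x1, x2 = 101, 202
op_checksig(point(x1), "input0")
op_checksig(point(x2), "input1")

k = 99
R = point(k)
Psum = (point(x1) + point(x2)) % Q
M = h("input0", "input1")
s = (k + h(R, Psum, M) * (x1 + x2)) % Q
assert finish(R, s)
```

Since Alice knows both private keys here, she can sign for the sum directly; independent signers would instead run the interactive partial-signature rounds described earlier.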
How does this interact with sighashes? Clearly, in our signature, we're now signing the hash of all messages, which you can only do if you know all the messages. If you want to do SIGHASH_SINGLE, and you don't know the other inputs yet, there's no way you can sign for the other inputs. Imagine it's n inputs. Could you have one SIGHASH_SINGLE and one SIGHASH_ALL? And the ALL is the aggregated thing? A way to do this, because the scheme we've explained so far does not address this, it can only do SIGHASH_ALL so far. In a more generic way, sometimes you may have a part of the signers that cannot interact with some of the other signers. This is the inherent problem you are trying to address with sighash types. In this scheme, you can solve this by having multiple accumulators: the signature would contain a number saying I belong to group 1, group 2, or group 3, and for each group you would require exactly one signature, and 1 through n pubkey-message pairs. There are interesting malleability ... any time you have sighash flags, you get interesting malleability issues.
If we need to implement the aggregation, it's just as hard to have multiple accumulators rather than just one; the amount of code is basically the same. However, now we come to the real problem: the linearization scheme from before, multiplying with the hash of the keys, is not sufficient. There are multiple ways it's insufficient. If you were doing the precomputing thing where you expect your users to delinearize their keys... well, the attackers could choose not to do that. Maybe you take p1 from Alice, but p2 is from Bob and it's the negation of Alice's key; the delinearization just didn't occur for p2. The attacker can add an extra input, use a pubkey that is the negation of the others, and then be able to spend everything. So the verifier has to do the delinearization in the network. It has to compute the hashes from the pubkeys, and it has to multiply them, to do the delinearization. That's why the performance is worse than batched Schnorr validation: the verifier has to take every pubkey, multiply it by some hash value, and sum the results together.
You have 10 inputs, each of which is doing basic multisig, and then compare that to everyone in the inputs doing cross transaction. It's faster, but not at basic multisig speed. It's in between. Nominally it's the same amount of work that you are now doing per verification, even though it's a single signature. However, for free you get batch validation, and this makes it faster again. Batch validation is something we could in theory do independently, but why bother if we can get all of this at the same time.
The simple linearization scheme from before is actually insufficient in this case, and you may not have heard about this because we only learned about it recently. The attacker can pretend to be multiple people at once. He cannot come up with a single key which cancels out everyone else. But if he pretends to be hundreds at the same time, there's a way to speed it up which is likely sufficient to break it. You have a key in the UTXO set that is, you know, 100 bitcoin, and someone goes and makes 100 1-satoshi outputs and then they are able to spend the aggregate of your coins and the 100 1-satoshi outputs, because the mixture of the pubkeys, even with the delinearization, is still able to cancel out your keys. They would generate hundreds of possible things to spend with, gazillions or such, and find some subset of them that cancels out existing pubkeys, and this is likely enough to make the security no good... you could do this with a feasible number of outputs; there's guaranteed to be ... guaranteed to be a solution. It's a linear combination? It's a nonlinear combination because of the hashes in there. Finding it would be computationally feasible? It's logarithmic in time. Wagner's algorithm is able to solve problems of this kind efficiently.
You give me a 256 bit number, and you give me 256 randomly generated other numbers. I can, in log n time, so on the order of 256 elemental operations on these numbers, find a subset of those 256 whose sum is the first one you gave me. The way it works is you sort all the numbers, you subtract the two top ones, and now you have a combination of two that results in a number that is slightly smaller, and you keep doing this until all of them are 255 bits. And now you take combinations of the 255-bit ones and subtract them from each other, and I'm guaranteed to end up with combinations that are only 254 bits, and I keep doing this, and at each step I gain one bit. The list grows exponentially, but there are more efficient ways to do this without exponential blowup in space. What you would do is, before creating those inputs, you would create thousands and thousands, and you would throw most of them away, and then you would put some of these inputs into the blockchain, and then you could steal the coins. So the old scheme is hash(A) times A plus hash(B) times B; instead, you first compute S = H(A || B || C || D) and then you use H(A||S) as A's coefficient... the hash commits to all of them, and now you can't have an algorithm that makes progress. When you want to add an extra input, the whole delinearization is done again and the previous work is thrown away. It's just hashing. You cannot precompute anymore, you can't cache anymore. We believe, we don't have proof, we believe that this is secure. We have not yet got a security proof for this one. It was found while we were trying to find a security proof for the other one. Pieter was unable to find a security proof, so Adam and I said let's assume it's broken, how do we exploit it, and then we found Wagner's algorithm, which is not widely known but pretty cool. It's modular subset sum.
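The set-committing delinearization can be sketched directly (hypothetical helper names; plain integers stand in for serialized pubkeys):

```python
import hashlib

Q = 2**61 - 1   # toy scalar modulus

def h(*parts):
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def delinearize(pubkeys):
    """Coefficients commit to the whole set: S = H(P1||...||Pn), and each
    key Pi is weighted by H(Pi||S).  Adding or removing any key changes
    every coefficient, so Wagner-style precomputation is thrown away."""
    S = h(*pubkeys)
    return {P: h(P, S) for P in pubkeys}

A, B, C = 111, 222, 333
c2 = delinearize([A, B])
c3 = delinearize([A, B, C])

# A's coefficient depends on the full set, not just on A itself.
assert c2[A] != c3[A]

# Contrast with the old per-key scheme, H(A)*A + H(B)*B, where A's weight
# h(A) stayed fixed no matter what other keys were thrown into the mix --
# that fixed weight is what Wagner's algorithm exploits.
aggregate = sum(coeff * P for P, coeff in c3.items()) % Q
```

This is only the coefficient computation; as the notes say, there is no security proof yet for this construction, only the belief that it resists the subset-sum approach.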
A very specific question is, it's likely that we need something like this of this complexity for the signature aggregation scheme, but not for multisig case. In the multisig case, the delinearization scheme with pre-delinearized pubkeys is sufficient and much easier. Are we going to go to the trouble of doing both? If we do multisig with this scheme, it becomes less efficient because now the delinearization has to be done on chain rather than before it hits the chain. It's not unreasonable to do, but one costs more than the other or something, it can be within, there's just an opcode to do multiple checks and it uses the simple scheme but on top of that it uses the more complex schemes where you do it across inputs. But maybe in some cases you, the point is you can do this recursively. And now what if the keys that end up in the blockchain are the result of already some aggregation scheme that is more complicated? I would feel more comfortable if we know that the construction was the one we knew was safe. It's a very significant performance cost to the multisig.
You can use this for multisig but the problem, in the simple scheme, we can say that the pubkeys that go into the script are already pre-delinearized. We already know they are legit and conforming to that. Yes, if this was done off-chain, you can use this scheme off-chain too. But Pieter was talking about doing it on-chain for multisig, you don't have to, but it's safer that way, because that way the user didn't have the opportunity to screw it up.
We've only talked about m-of-m where all of the keys sign. Say we want to do 2-of-3. Now there are 3 different possibilities. What we can do is use this more complex scheme, do it on the 3 different subsets, and have a script that says sign with this pubkey, this pubkey or this pubkey, with a single signature and 3 aggregated delinearized keys on the chain. But can you use your tree signature scheme or MAST? Yes, you can do that on top. This becomes infeasible when we're talking about 10-of-20, maybe; I don't know the numbers exactly. For some number of public keys this becomes impossible to do, because creating the address requires iterating over all combinations, all of them pre-aggregated. And this easily becomes infeasible; it's not very hard to come up with cases where you need 1 year of CPU time to make the address. 3-of-2000 is still feasible, 50-of-100 is doable. But 6-of-1000 is not doable, because that's 2^64 work to just compute the address. However, if we have a native multisig opcode, where you pass some pubkeys and it adds them together at verification time, you have 1 signature, 60 pubkeys, a 60-wide bitmask with 3 bits set. And the opcode itself would do the aggregation. And now we can say, well, if this uses the simple scheme, this is super fast, still as fast as a single verification, whereas using the more complex delinearization, it needs to do an EC multiplication for each of the keys and it's as slow as a batch verification of 3 signatures. For cases where tree sigs cannot be used, it would be more efficient to use the simple delinearization scheme with pubkeys that are pre-delinearized, and that is incompatible with the more complex scheme which is secure in more cases. If we're talking about aiming for the best efficiency, we probably need to do both.
In case of having to do with this delinearization from every node, how much faster is it to just validate multiple signatures? So you shave off 2-3x with maybe some other optimizations. It's better. But not having to do with the delinearization in the network, is way faster. It's basically O(1). If you have something with a thousand signatures, it goes to 1 instead of 500ish. Maybe it goes to 2 or 3 or 4, whatever, for the additions. In the simple case, every additional signature adds the cost of 1/100th while in the more complex scheme it costs 1/2 or 1/3rd.
None of this is necessary in BLS. Does the Wagner attack impact polysig? No, it doesn't. You have to delinearize polysig too, but it does have to be delinearized in the network. Polysig already pays the cost of big multiplications all the time. You can't delinearize polysig in the network because you don't have the data. Polysig is a scheme that does 9-of-10 multisig with size 1.
We have three schemes that start with Schnorr signatures and get you better multisig. They are better in different ways. The things they are better at are accountability, to figure out who signed, and maybe some improvements in non-interactiveness in terms of key setup for thresholds. The first scheme is the one Pieter mentioned, Schnorr checkmultisig, where you give the set of public keys in, and a bitmask, which also goes in, of which ones will be signing; the verifier sums up the pubkeys, maybe does delinearization, and then you provide one signature. This is advantageous in that it's accountable: anyone can see the keys. The use case here is that I want to set up a honeypot. I have 1000 machines, I want to know when one of them is broken into. I have 1 bitcoin that I don't mind losing; I could only put 1 millibit on each machine, but if I could have a 1-of-1000 multisig, I could give each machine the same 1 bitcoin, but it's only useful if I can see afterwards which machine was broken into. There are also other reasons for accountability, like legal reasons. This scheme gets you accountability, but it's size O(n) in the number of pubkeys; you don't want to do the 1000 pubkey case in it. It's efficient to compute because you add up the keys, especially when the number of keys is low. That's one scheme; it's obvious to do, it's a no-brainer. Also, for the checkmultisig thing, if you want to do a threshold thing, like 2-of-3, each of you have pubkeys and you want to do 2-of-3: if we try to do the O(1) Schnorr, that works for combined keys n-of-n but not really well for thresholds. To do a threshold, you have to do an interactive process where you end up computing differences between keys and sharing them amongst each other; it's not n squared, it's worse. It's exponential in the number of... but for actual normal 10-of-20 or whatever, a network protocol can do that in a couple seconds.
Also it's completely insecure in the presence of key reuse. Nobody does that (laughter). We don't think people will do the O(1) things because you lose accountability, and you need complex interactive thing, and with checkmultisig scheme you get accountability, you get space savings, no interaction to setup the keys, interaction to sign, but not for 1-of-1000 because you need to put 1000 pubkeys in the chain.
The second is key tree signatures. You can take an n-of-m threshold and decompose it into the set of n-of-n combinations that satisfy it. Say you want to do a 2-of-4: the combinations are AB AC AD BC BD CD, 6 combinations. I can effectively turn it into 1-of-6 by expanding the combinations. Now I use this scheme of making a merkle tree out of them. My scriptpubkey is the root of the merkle tree. It's similar to the MAST stuff; it's key-only MAST. You can do this in bitcoin script if you had a concatenation operator (OP_CAT) or an OP_CHECKMERKLEPROOF. Pieter implemented OP_CAT to build and verify a merkle tree in a transaction in Elements Alpha. It requires 6 bytes in the script per level in the tree, to validate the merkle tree, plus 32 bytes in the signature to reveal the merkle path. What goes in the pubkey is the root; what goes in the sig is the merkle path connecting to the root, plus the combined public key, and a simple checksig. It's quite efficient. It has a linear property. You get this combinatoric blowup, but you put it in a log. So the size is always efficient, it's just that this tree might be a billion ..... you could do it with log space but not log time. You can verify in log time. The signature is log space, but signing is ... for the verify and stuff, if it's 50-of-100, you could never compute the public key.
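A sketch of the key tree construction for the 2-of-4 case: enumerate the satisfying combinations, put the combined key of each in a merkle leaf, publish the root, and reveal one path when signing. Toy hashing and key sums, hypothetical names:

```python
import hashlib
from itertools import combinations

Q = 2**61 - 1   # toy modulus for the combined "keys"

def h(*parts):
    data = b"|".join(str(p).encode() for p in parts)
    return hashlib.sha256(data).hexdigest()

def merkle(leaves):
    """Build all levels bottom-up; returns (root, levels) for path lookup."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]            # duplicate an odd last node
        levels.append([h(lvl[i], lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels[-1][0], levels

def path(levels, idx):
    """Collect the sibling at each level for leaf number idx."""
    out = []
    for lvl in levels[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        out.append(lvl[idx ^ 1])
        idx //= 2
    return out

def verify_path(leaf, idx, proof, root):
    node = leaf
    for sibling in proof:
        node = h(node, sibling) if idx % 2 == 0 else h(sibling, node)
        idx //= 2
    return node == root

# 2-of-4 expands to 6 combined 2-of-2 keys: AB AC AD BC BD CD.
keys = [111, 222, 333, 444]
combined = [sum(c) % Q for c in combinations(keys, 2)]
leaves = [h(P) for P in combined]
root, levels = merkle(leaves)            # the root is the scriptPubKey

# Spend with keys A and C (combination index 1): reveal that merkle path.
proof = path(levels, 1)
assert verify_path(leaves[1], 1, proof, root)
```

The signature reveals only one combined key plus a logarithmic path, which is why the combinatoric blowup in the number of leaves stays hidden from the chain.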
There's a third scheme, which is polysig. It's using polynomials and a polynomial matrix. I'll give a simple case: say you want to do a 9-of-10 threshold multisig. This is making productive use of the "I can cancel out your public key" vulnerability, turning it into something useful. Say 9 people want to sign and the 10th person is not there. Let me restrict this to 5... so it's 4-of-5 in this example now. Call the sum of the pubkeys A+B+C+D+E = S1, and the other combination A+2B+3C+4D+5E = S2. This is a system of equations: we've precomputed S1, which is the sum of the pubkeys, and S2, which is the sum of the pubkeys times this linear ramp 1 2 3 4 5. So the verifier computes S2 - 4·S1..... D cancels out, 4D minus 4D becomes 0. And we've used this cancellation to our advantage because we don't need D anymore. Oh, it's the other way around: S2 - 4·S1 would give minus 3A, yeah. You tell the network, basically, which ones you're leaving out. The network computes this combination, and you sign this combination that the network computes. This generalizes. You can do A + 4B + 9C + ..., the squares, one more equation for each signer you want to cancel out, using the sums of the higher powers; it's just a Vandermonde matrix, right.
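The cancellation arithmetic for the 4-of-5 example checks out mechanically. A toy sketch (scalars mod a prime in place of EC points, hypothetical names):

```python
# With S1 = A+B+C+D+E and S2 = A+2B+3C+4D+5E committed in advance, the
# verifier can cancel the 4th signer by computing S2 - 4*S1.
Q = 2**61 - 1
G = 7

def point(x):
    return (x * G) % Q

secrets = {"A": 11, "B": 22, "C": 33, "D": 44, "E": 55}
pubs = {n: point(x) for n, x in secrets.items()}
names = ["A", "B", "C", "D", "E"]

S1 = sum(pubs[n] for n in names) % Q
S2 = sum(i * pubs[n] for i, n in enumerate(names, start=1)) % Q

# D (index 4) is offline: 4D - 4D drops out of S2 - 4*S1.
combo = (S2 - 4 * S1) % Q
expected = (-3 * pubs["A"] - 2 * pubs["B"] - 1 * pubs["C"] + 1 * pubs["E"]) % Q
assert combo == expected

# The four present signers can jointly sign for this combination, because
# its discrete log is the same combination of their private keys.
joint_secret = sum((i - 4) * secrets[n]
                   for i, n in enumerate(names, start=1) if n != "D") % Q
assert point(joint_secret) == combo
```

D's key appears with coefficient zero, so the remaining signers never need D's participation, yet the network can see exactly which index was left out.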
You can build a merkle tree where you put S1 in there. What would be the second level? Squared, cubed, with coefficients 2 3 4; they're just your indices. This is the sum with your squares, this with your third powers, and this with your fourth powers. I only construct this tree up as far as I want. If I allow up to 4 people to not sign, then at the 5th level I just put garbage: I invent a random number and I build this tree. This is what goes into the scriptpubkey, the root. When everybody is present, we compute the single signature, and we verify it just with S1, which is the sum of the public keys. If one person is not present, we reveal this sum, with the merkle path from this node up, and do a signature; likewise where two people are missing. So this has two advantages. One is that it simplifies in terms of space, signing time, and verification time to a single signature when everyone is present. If you've built some 90-of-100 thing, but everyone is online, you just produce the O(1) signature that comes out. The random number here means that nobody in the network knows up to how many you have allowed to not be there. The only thing you have revealed is how many people were not present at signing time. You can tell who was not there, so your accountability is perfect, and you have leaked very little of your private information in this scheme.
There are some optimizations that you can do to make this easier. Instead of using 1 2 3 4 and their squares, you can use the modular inverses of the numbers at generation time so that the verifier doesn't have to compute the inverses. It's fast. So this is efficient when the number of missing keys is small in absolute terms. It's inefficient if you are going to do 1-of-1000 and like 990 are missing. That's efficient under some of the other schemes, right, so we already have that. Polysig has this cool property where if they are all available, that's efficient. In any scheme, you could do a MAST scheme where you have a checkmultisig on the left branch. It's a recursive P2SH basically (MAST). We don't have an implementation of polysig yet, and jl2012 does have an implementation of MAST.
This is like applying MAST concepts, plus we're using the tree thing here, plus the malicious cancellation thing turned into something useful. There are use cases where this is cool. I don't know how much it actually matters; do people really want to do 90-of-100 multisig? Maybe they want to do 8-of-10... what are the use cases for... we would use this for like, Liquid, which is the sidechain transaction system where there's a federation of somewhat trusted parties that sign for the transactions and most of the time they are all online. We would often have thresholds like 2/3rds or higher, but in any case, regardless of our threshold, they are usually all online and we would be able to sign with O(1) signatures. We're not using polysig now, but we care about the size of the signatures so that we can release this on bitcoin transactions. The federated peg is where this really matters too.
Any time you have a greater-than-majority multisig threshold with a larger than usual number of participants, this polysig scheme is uniquely attractive. The difference between revealing 64 bytes of data versus revealing 15 pubkeys, that's a pretty big difference too. You can have 50-of-100 with polysig, but it's going to be size 50; it wouldn't be much better than using a key tree. And, so one thing, you can't compute a key tree with 50... it wouldn't be any better than the CHECKMULTISIG case. If it's 50-of-100, and you only have 50 available, then you use a checkmultisig.
In theory, if your policy was something like 10-of-1000 well you don't want to do that with polysig, so when your top-level aggregation scheme is a complex delinearization, you can put anything in it, you could have all kinds of opcodes that result in some verification with a pubkey and a message and all of them get delegated to the last -- all of these schemes have various ways of determining what pubkey to sign with, or what pubkeys to sign with. And you can aggregate all of it under a delinearization scheme that makes it composability-safe. Like putting it behind MAST branches.
Anywhere you have a functionary design pattern, this would apply: sidechain functionaries, oracles, federations of oracles, and truthcoin. Most of the time the verification people are online and so they benefit from a polysig scheme.
With signature aggregation across txins, you would require the complex delinearization. And that same technology, you can put various means into it for determining what to sign with. And under the assumption that it's safe for signature aggregation, it's probably safe for any other use case, which is a nice fuzzy feeling. One thing that I think you didn't go into is the concrete statement about what the signature aggregation would look like, but the idea that we were thinking about was an OP_CHECKSIGAGGVERIFY where you would normally input a public key and an empty object for the signature. And potentially an id number. The group number. We could say, use bip9 again to do a new soft-fork which is Schnorr, the simplest thing possible, which, we could say, replaces the existing checksig. Probably we would do it as a separate opcode, but just think of it replacing it for now. So checksig is an operator that currently takes a pubkey and a signature, where the signature is an ECDSA signature plus a sighash type. But here the signature would instead be either a sighash type and a group number, or a sighash type, a group number and a schnorr signature. Per group number there would be one signature in the entire transaction that provides an actual signature, and all the rest would be just a hash type and a group number. All would automatically succeed and return true except when you pass in an empty signature. At the end, the transaction could fail. At the end there would be a second stage, not script validation but transaction validation, that would look at these aggregates that were built and do the magic with delinearization and batch validation to verify all of them. Implementation-wise, this is probably not hard.
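A hypothetical sketch of that two-stage flow follows. The names (AggContext, op_checksigagg) are invented for this example and the actual signature check is stubbed out as a callback; the point is only the shape: script execution records pairs, transaction-level validation verifies them in one pass:

```python
class AggContext:
    """Per-transaction accumulator: script execution just records
    (pubkey, message) pairs per group; real signature checking is
    deferred to a transaction-level second stage."""

    def __init__(self):
        self.groups = {}       # group number -> list of (pubkey, msg)
        self.signatures = {}   # group number -> the one actual signature

    def op_checksigagg(self, pubkey, msg, group, sig=None):
        # The script-level check "automatically succeeds"; it only
        # pushes the pair onto the per-group stack, plus at most one
        # real aggregate signature for the group.
        self.groups.setdefault(group, []).append((pubkey, msg))
        if sig is not None:
            self.signatures[group] = sig
        return True

    def finalize(self, verify):
        # Transaction-level validation: each group needs exactly one
        # aggregate signature covering all of its (pubkey, msg) pairs.
        return all(g in self.signatures and verify(self.signatures[g], pairs)
                   for g, pairs in self.groups.items())
```

In a real implementation `verify` would be the fused delinearization-plus-batch-validation step described below; here it is just a placeholder.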
For segwit, we're essentially already adding a per-transaction validation-level structure, which is the sighash cache; it's not currently in the pull request but we'll submit that soon. Oh yeah, tadge implemented that in his btcd implementation. It works, it verifies everything. We haven't tried mining with it. And this is essentially the same data structure as caching per-transaction sighashes; it could be the same data structure where we store all the message-pubkey pairs and the aggregate signatures before they get validated. And on top of that, we could do the bitmask thing easily as a bitmask multisig opcode. But all it does is provide another mechanism for pushing things onto this stack; it would not be separate. Every other signature scheme added is just a thing that pushes onto this stack, and in the end, where the complexity comes in is at signing, because we don't have a nice scheme anymore where various people provide a signature and at the end you concatenate and put into a scriptsig. You are generating way more information during the interaction round for signing, and at the end collapsing it into a single signature. The reasoning and combining logic would be...
For a single wallet with a bunch of utxos and different keys, it's all message passing within the program. But it's still more complex. It's more complex than it theoretically could be. But the signing code in Bitcoin Core is moronic and way more complex than it could be. Maybe. We haven't tried the other way. So you need a security proof for the delinearization scheme? Yes. We need that before we deploy it. It should be a new opcode. At minimum it would be new because it would be in a new segwit script version. It's not like anyone else is going to get this imposed on them. That's not an acceptable answer: "oh well you chose to use this cracked-assed thing, what do you mean this new script version we introduced was utterly insecure, you should have verified the delinearization proof yourself". Yeah, so we need to publish on the delinearization scheme and get some peer review. We should implement it too; these things can happen in parallel. I'm sure that as we implement it we will have opinions about the construction that might change the proposed design slightly. We need to implement a high-speed verifier for this, because the delinearization scheme should be fused with the verification, because this will asymptotically double the performance. So we need to write a verifier that does the fused verification. Whether you do a number of separate groups or batch validation across all of them with per-group delinearization, this can all be done in a single operation with enough pre-processing. It's just engineering. It will be in libsecp256k1, some aggregate signature module, with non-malleable 64-byte signatures.
Non-malleable sounds good. Provably non-malleable. Fixed length is also good. No DER. We struggled on trying to come up with a security proof for this, so we probably need to have a different proof approach. We should talk with the Stanford people. Academic crypto is often BLS land and things like that. The Schnorr aggregation is kind of boring to those folks.
[libsecp256k1] This is now used in the last release of Bitcoin Core. There was quite a bit of original research both in optimizing it and in verifying it. I think we've used novel techniques for both. Essentially these are not written about anywhere except in comments in source code maybe, which is not a good situation to be in. We need to write a paper about this. We've been talking for a long time about this. There's many things to do and not a lot of time.
The MAST proposal is quite straightforward. It's quite similar to the current P2WSH scheme. When you are spending the coin, you need to provide the script, the merkle branch and also the position. You use these to calculate the root, and compare with the scriptpubkey. It builds a tree where the leaves are scripts. And then you say, this is the script I'm executing, here's a merkle branch that proves it was committed to. This is not the most generic form for MAST. In the inner nodes of a MAST you can also put scripts that are combined. It doesn't actually matter, it's probably equivalent. Okay, forget what I was saying. You just have to put OR behavior in the leaf itself. That's the only difference.
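The root computation is just an ordinary merkle branch walk. A minimal sketch, assuming SHA-256 as the tree hash (the function names and the position-bit convention are invented for illustration):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def mast_root(leaf_script: bytes, branch, position: int) -> bytes:
    """Recompute the tree root from the revealed script, its merkle
    branch, and its position (bit i set = the sibling is on the left
    at level i). A verifier compares the result to the commitment in
    the scriptPubKey."""
    node = sha256(leaf_script)
    for i, sibling in enumerate(branch):
        if (position >> i) & 1:
            node = sha256(sibling + node)
        else:
            node = sha256(node + sibling)
    return node
```

Spending then reveals only one leaf script plus one 32-byte hash per tree level, regardless of how many alternative scripts were committed.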
You could have OP_EVAL take a segwit script version and the actual script to execute. You have a new opcode, OP_EVAL, and it's able to evaluate any segwit script version. You could just use this; if key tree was done as a script version, you could do it as one opcode as well, as if you have... and you would have a, yeah. You run into challenges with that: you'll first run into the script size limits. But then, also, you basically have a recursive structure, so now you have limit and memory management tracking that you need to do as you recurse through the levels. As you finish the eval you need to pop back up.
The advantages of a merkleized abstract syntax tree: there are obvious advantages when you have very complicated scripts, when your leaves are very long and you don't want to publish them. This is the obvious case. There's much more to it. In the same sort of reasoning, if you see this blockchain as a settlement mechanism that will verify things when you ask it to: say there are a few people and they construct a complex set of rules under which outputs can be spent. This is a large script that would have to go into the blockchain. They know they can do so, so what if you take the complex script and just say "or everyone agrees", and it becomes a simple multisig of all the participants. It could be polysig or simple schnorr multisig; you may not want to allow people to not sign in this, the requirement might be that everyone has to agree. In the simplest case, it's a multisig where everyone signs. Simple multisig of everyone, OR complex script. In almost every case, everyone will just agree because they know they cannot gain by not agreeing, because if they don't then the big script will be published and the blockchain verification will grant whatever rights it grants, in the same way that the court is effective by threat-of-court. That's the mental model that is useful for this.
There are more reasons for that. If you now think about the signing case: we're writing a complex contract, now I need to write signing code, someone needs to construct the signatures for this. Pretty much all the signing code I've seen in Bitcoin applications is: we template-match whether it's something we know, and then we have a separate implementation for every type of signing. This does not scale for complex contracts. The way you do this is by composing multiple contracts. You might have a 2-of-3 contract, but you are already doing 2-of-2 multisig. Now I am combining 2-of-2 with you, with 2-of-3 with them, and we're putting this into a complex contract that allows an 8-of-10 majority vote from some executives to overrule you or something; now try to write something to actually implement signing code for this. Templates. It won't work. Because, if you did not have MAST, if you were using something in the style of the current opcode-based scripting language, you really have no choice but templating. It's very hard to reason about an imperative program: what are the inputs I can construct to make this program evaluate to true? MAST is so much more structured. You can, by the way, take the merklization further than just the top; you have a bunch of ORs at the top, and beneath you can have ANDs that don't need merklization, but they can still be ASTs, and just having a script language structured as a syntax tree rather than an imperative program allows much easier reasoning, which leads to simple signing code. The goal here should be aiming in the long term for a programming language for writing these scripts where it's true that if I have a script and another script and I know how to sign either of them, I can sign for the OR of them, and if I know how to sign for both of them I can sign the AND, without having to template every combination.
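The compositional reasoning described here can be sketched over a toy policy tree; the tuple encoding and the function name are invented for illustration, and deciding "can I sign this?" recursively replaces per-template signing code:

```python
def can_sign(policy, keys):
    """Decide signability recursively over an AND/OR/threshold syntax
    tree instead of template-matching whole scripts."""
    op = policy[0]
    if op == "key":                 # leaf: one required signer
        return policy[1] in keys
    if op == "or":                  # signable if any branch is
        return any(can_sign(p, keys) for p in policy[1:])
    if op == "and":                 # signable only if every branch is
        return all(can_sign(p, keys) for p in policy[1:])
    if op == "thresh":              # k-of-n over sub-policies
        return sum(can_sign(p, keys) for p in policy[2:]) >= policy[1]
    raise ValueError("unknown policy node: %r" % (op,))

# "2-of-2 with you, OR 2-of-3 executives overrule" under one recursion:
policy = ("or",
          ("and", ("key", "me"), ("key", "you")),
          ("thresh", 2, ("key", "A"), ("key", "B"), ("key", "C")))
```

Actual signing would work the same way: produce partial signatures at the leaves you control and combine them upward, rather than enumerating every script template.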
Or I can sign the pieces and hand out to others, where it's delegation, like signing over rights under some conditions to someone else. So you take this AND/OR construct, you sign everything you know how to do via template matching at the leaves, and give this to someone else and they can sign, and eventually you satisfy the admissions policy here, like what's admissible. And this is not limited to Bitcoin scripting. It would be nice if ssh or gpg allowed signing with multisig or complex generalizations of it. That's something we've been talking about in rebooting web-of-trust thing for example, there's been some work there to leverage this idea of what we call Bitcoin script is essentially something like a "smart signature" or "programmable signature" and it would be nice to have a sort of standard that exists outside of Bitcoin. Maybe in Bitcoin it's not the same implementation that we want, because Bitcoin has very peculiar design constraints like efficiency, verification, privacy, signature compaction, consensus reasons, etc.
petertodd is being paid to implement something like that. It's not an interesting topic, it's too practical. It's playing computer science; this is engineering. There's an area of cryptography called attribute-based encryption that has a lot of similarities to this sort of thing. You have some complex program that decides whether someone should be able to decrypt a message, and there are ridiculously complex encryption schemes where you can encrypt data with this program as the pubkey, and to decrypt it, the person has to provide inputs that this program would accept, and only if they can do that will the program decrypt. It would be like an authority that signs off that you have this gender, this age, this nationality, whatever; now you only want people who can prove that at least one of them is at least 18 and one of them is a citizen of the US. That's something that this area of academia is able to do. This stuff is the building block towards fully homomorphic encryption. It's not remotely realistic technology, but it's the closest spiritual relative to this. The absolute extreme of programmable signatures is SNARKs, where I have an arbitrary program and I prove that I ran the program with secret inputs.
Also with these languages, if you actually make them work well enough, you could go write your consensus protocol definition in the language itself. So if you run it against itself, it returns true. Russell O'Connor found some of the early vulnerabilities in bitcoin; he discovered compressed public keys, the coinbase overwriting issue (BIPs 30 and 34), wrote a Haskell implementation of the protocol, and he invented MASTs; I can't remember why he called it midas money. He's working on smart contract stuff, like language design things.
After segwit activates, whenever that is, what are the priorities for the next version of script? Schnorr in some way should be in there somewhere. Maybe just schnorr by itself. Maybe it's just the very basic thing, with the bitmask feature. Maybe the signature aggregation, I don't know; it may depend on us getting the security proof and review and everything. And an OP_CHECKMERKLETREE, or we just re-enable OP_CAT; it's 6 bytes per level, you don't need a merkle tree opcode. You hit execution limits, which limit you to 2^31 leaves. You can do 2 billion. That's for key tree. With lightning, you'd be doing 2 or 3 key trees. So far it's just 2-of-2; if they are schnorr, they are doing the O(1) schnorr for that. Unless one of the parties has a policy that is a key tree. Yeah, there's lots of stuff you could support, if you want scripts that have CSV or escapes or things like that for each transaction.
Definitely basic Schnorr with CHECKMULTISIG thing, and address scheme. Would the address scheme be for Schnorr keys only? We would make it generic for all witness. We really must. When we do another address scheme, it should be one that is generic for segwit with multiple segwit version numbers, even non-invented ones. Maybe subject to some size limit, like if a future segwit version number needs larger scriptpubkeys that don't fit in the size maybe then a new address type. Part of the goal of P2SH was to not need a new address type in the future, and we failed. We could do segwit under it..... It's not a complete failure, but not as nice as we wanted it.
When using Schnorr with the P2WPKH format with 20 bytes: yes, you are still exposed to the... if it's actually a multisig, yes. I believe the 160-bit collision attack applies. Ah, okay. It would be a reason to... you can leave it up to the people involved. I mean, right now P2SH multisig is vulnerable to this; it's more vulnerable in that you're given a bunch of freedom, it doesn't have to be pubkeys, it could be anything. Once we upgrade to Schnorr, we could say there's no such thing as a 160-bit hash anymore. The norms in cryptographic advice are pushing up now towards larger sizes. You wouldn't generally deploy a new system at 160-bit security. There's 224-bit security rather than 128-bit on a... so, NSA issued a public announcement recommending that people go to larger than 256-bit elliptic curve cryptography, and nobody really knows what's motivating this recommendation. They were talking about quantum computing stuff. Going to 512-bit does not meaningfully secure you against quantum computing unless they know something about the engineering; perhaps they know about a vulnerability in one 256-bit curve and they don't want to announce which. Or maybe they have a vulnerability in 512-bit... maybe they were trying to screw over curve25519 by encouraging people to use larger curves. That's impossible to guess about. The trend is to cryptosystems larger than 256-bit, so I don't think we should promote 160-bit security to save relatively small amounts of bytes; that's not interesting. If you want to save size, then do aggregation, which is way more interesting than shaving a few bytes off address size. Addresses are going to become unwieldy.
Similar problems show up with pairing stuff. 256-bit pairing, the security story is not as strong as the plain EC security story. If 256-bit security is too small, if you go to 500-bit pairing the operation becomes much slower, it becomes cubic.... The security is provably less in the sense that, in the abstract sense, it's at most as good. Any compromise of EC cryptography immediately translates into compromises on pairing schemes unless they are very specific to curves. Plus pairing gets several other families of potential attacks. It's not as strong in the abstract theoretical sense, and concretely it's all secure as far as we know. In elliptic curves, there's a particular type of optimization you're not allowed to do because it's insecure. In pairing, you are choosing curves in such a way that this optimization becomes secure again and then you're just using that.
I don't really know how to handle this cool new cryptography that maybe isn't secure; how do we deploy it? I worry that if it's a main feature in Bitcoin it's not sufficient to say "buyer beware". If I can't evaluate the security of this, then none of the users can. You can try to tell people, well, this is probably less secure, so you should only use this for low value... but nobody knows how to evaluate this kind of statement. And yet at the same time, the state of security research won't be improved unless it's deployed in applications that have value. Bitcoin is a nice use-case if you want security research: just implement it and put money on it, every transaction is a bounty. Dan Kaminsky says it's a little strange that bitcoin is not instantly exploited, because it turns the proverbial rainbow gold into actual rainbow gold that is much more concrete.
NXT is vulnerable right now because of ed25519. How often do... they have broadcast checkpoints, signed by the authors. It's all proof-of-stake, 100% premined. You get a small amount of payment for mining. So putting speculative crypto into an altcoin, I don't think anyone is going to care. And since they are checkpointing like that, they can just undo it. And the same is probably true for sidechains. You could put it into testnet, but some academics prefer to wait until it's in bitcoin. It's more interesting to publish about vulnerabilities in a production system, so that's why they wait to publish or attack.
We should write a blog post about this stuff, even if it's just to show people in academia that people are thinking about using this particular crypto in a production system. We've heard from people in academia: yeah, we think of interesting cryptographic systems all the time, but most of them don't get any use because for practical purposes they are not better than anything that exists. Up to a few years ago, people didn't upgrade to EC because RSA was enough. But in bitcoin this is not good enough, because RSA would mean huge signatures. We don't care about signing time in bitcoin, because it could be seconds, but verification has to be milliseconds or less, and preferably under 100 bytes. Due to having different design requirements (and personally I think this is one of the most interesting things in bitcoin), you're able to improve on the state of the art because the design requirements are different and unexplored. We should post on a blog that we're looking at different cryptosystems that might be interesting to a wider audience than just the actual users of the system. So Bryan, have you written any of this? "No, I have been writing fanfic. You're a crypto wizard, Harry."
If we're allowing way more crypto signature validations to get into a fixed size block, still a megabyte or whatever... the pubkeys have to go into the chain too. For everything we have, even for polysig, there's a proportionality between the verification time and the number of public keys. It's still linear in the number of bytes for all these schemes. You're getting at most 2x. Instead of having a 32 byte pubkey, you don't need a 60 or 70 byte sig... polysig or tree sig gets you back to the status quo. It evens out. But not for the pairing stuff; there the CPU question is hard. You're required to pad your public keys to 300 bytes if you want to... (joke?). Well that's good, you can basically fit 2x or 3x as many transactions, both in CPU time and in size. This is the general direction that I think segwit starts us on, which is thinking about cost rather than just size. I don't think these sigop limit things are a good way to do things, I don't think so.
I can make this take fewer CPU cycles, but it's more bytes and I end up paying more. I'm doing an OP_CHECKSIG that I am almost sure would fail, and I'm thinking I should put this at the end in an if-statement. If you put an explicit zero... you can start with OP_IF, you don't need that. You can use 0 as the signature and it will fail immediately; we know that will fail and we don't do any expensive computation. It will still cost you a sigop, but since we're not sigop-limited it's sort of irrelevant. But if we were to do something like a pairing implementation, either for this checksig stuff or in a SNARK, we would have to be adding the cost of the pairing operations somehow to the cost metric of the block. The way you do it: in segwit, the length of the block is no longer the limit; it's now a cost that is computed from the size, but CPU cost could be added to that, or CPU cost in some abstract machine. I think the real problem is that putting these things on the same scale is hard. You could equalize them for an assumed configuration, like having a CPU with a network and whatever; yeah, I can get something that... we already struggle with that just for bytes. For Tadge's comment here, what's actually most important is to get the slope right, in the sense that if you're paying nothing for this and something for that, it will make you make the bad decision. Maybe the ratio doesn't quite reflect the true cost to the network, but if it's in the right direction then it's much more sensible for network optimization or something. Having a single cost limit, and some things that contribute in a linear way, allows you to more... what the factors are will affect what the optimal choice is, but you'll never end up with some pathological...
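A single linear cost limit of the kind being discussed could look like this sketch. The coefficients here are made up for illustration, not proposed values; segwit's actual rule is only the 4x weighting of base-size bytes:

```python
SIGOP_COST = 50      # assumed cost units per signature operation
PAIRING_COST = 4000  # assumed cost units per pairing operation

def block_cost(base_size, witness_size, sigops, pairings=0):
    # segwit-style size term: base bytes weighted 4x, witness bytes 1x,
    # plus hypothetical linear CPU terms for expensive operations.
    weight = 4 * base_size + witness_size
    return weight + SIGOP_COST * sigops + PAIRING_COST * pairings
```

A block would then be valid when the summed cost of its transactions stays under one limit, so a sigop-heavy or pairing-heavy transaction pays for its CPU use in the same currency as its bytes, instead of hitting a separate independent cap.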
You could do this for free now, which is something: if we put limits separately on the sigop cost and the bytes, you could add more sigops because that limit still has room left in it. That's not a super practical thing to exploit, but it could be exploited. The incentives aren't right for that. There are cases like, well, I'm already, you could choose to... something like... I'm already... If I construct it this way, it's a little cheaper this way; it's arbitrary for the network. From the network's perspective, I can do it this way and it's cheaper, but it actually costs me more bytes, same amount of sigops, same amount of everything. We'd calculate the cost and do the cheaper of the two, and it may not be cheaper for the network. Another way to think about it is that there's a cost matrix with zeroes in many of the positions; every cost matrix is a choice, and we have chosen a cost matrix through inaction that has zeroes in all of these, and regardless of what the composition of the network is, zeroes in all those terms is not the closest you can get. OP_EQUALVERIFY and more OP_HASH whatever can save me more bytes; any approximation is better than zeroes in all these terms. The way to look at it is that someone comes up with a use case where someone really needs a transaction with 20k sigops; the entire business model relies on this and they will pay whatever it costs to get that in, even if the transaction would be 50kb. They get 950 kb for free because no other transaction can go in anymore, so there's nothing else to charge for, so time to put your junk in there since you're getting 950 kb subsidized for free by the guys who are paying for all the sigops. But fortunately the limits are such that this is not likely to be a practical problem. But if we wanted to address checksig being slower by adding more linear constraints to the block, then these special situations where you get more space for free become more of an issue.
So we could put a cost on it, and I'm strongly of the opinion that any cost matrix with non-zero terms for things with non-zero costs is better than a cost matrix where cost is zero across the board, even if you use very conservative numbers for all the coefficients. I don't know what the future is, though. I came up with a scheme that would be interesting, but I think it has bad long-term behavior. You could imagine a soft-fork... you could design a soft-fork so that it applies up to height whatever and then it goes away. Before that point happens, you have an updated matrix that affects the... how do you do measurements of the storage versus... you design a prototype environment; you say my environment is ant-enabled, she has satellite internet and a raspberry pi. I don't think even just bytes are different whether they are in the UTXO or the witness or they are pruned. You can come up with a story that gives a rationalization. Is it unique? No, another rationalization is feasible, but my argument is that this is better than nothing. The problem with this isn't so much that you can't come up with something, because I think you could, but the idea is that it goes away and then you soft-fork in new ones, which creates a huge incentive for, if you have no......... Validation should stay very cheap. You can have incredibly expensive contracts; you just need the threat of executing them, you don't have to actually execute them. And validation has to be kept really cheap.
Use trees of transactions instead. Just don't broadcast them. Yes, this is pretty awful; Lightning is the NAT of bitcoin. With trees of transactions, you want to close off the possibilities and eliminate conditions. If you are not eliminating spend conditions, then you should prefer MAST. Joseph Poon should explain his anti-MAST argument. Tadge says Poon sent an argument to the mailing list suggesting he is OK with MAST in some cases.
Wagner's algorithm https://www.eecs.berkeley.edu/~daw/papers/genbday-long.ps
random modular subset-sum
< gmaxwell> The actual break for the P*H(P) scheme looks like this. Say you want to steal key A. Start building a list of numbers H(2*A)/2, H(3*A)/3, H(4*A)/4 and so on until you have a huge collection... then you use wagner's algorithm to find a subset that sums to -1 mod N. and then if you substitute in the pubkey you see that cancels.
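The quoted attack can be exercised at toy scale. The sketch below uses a small prime modulus in place of the secp256k1 group order, brute-force subset search in place of Wagner's generalized birthday algorithm, and an integer A in place of the victim's EC public key; it only demonstrates the subset-sum step, not the full key-cancellation substitution:

```python
import hashlib
from itertools import combinations

N = 10007  # small prime standing in for the group order

def H(x: int) -> int:
    # hash-to-scalar stand-in
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % N

A = 1234  # victim "key"

# Build the list H(2*A)/2, H(3*A)/3, ... as in the quoted attack.
coeffs = {k: H(k * A) * pow(k, -1, N) % N for k in range(2, 60)}

# Find a subset summing to -1 mod N. Wagner's algorithm does this in
# sub-exponential time at real sizes; brute force suffices here.
found = None
for r in range(1, 5):
    for combo in combinations(coeffs, r):
        if sum(coeffs[k] for k in combo) % N == N - 1:
            found = combo
            break
    if found:
        break
```

Once such a subset exists, substituting the corresponding keys into the P*H(P) aggregate cancels A as the quote describes, which is why the delinearization scheme needs a real security proof before deployment.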