Race condition between block subscription and RPC call eth_getBlockByNumber


I am trying to collect statistics from new block subscriptions with Infura. After successfully subscribing with Nethereum's websocket client, I receive a block and then immediately request the block's transactions using its block number. There seems to be a race condition between when I am notified that there is a new block and when the block with its transactions becomes available.

To confirm this race condition, I added a 0.5-second delay and a retry loop. Although this improved the failure rate, I am still receiving null blocks from Infura.

I am not concerned about the corner case of a block with no transactions in it.

The RPC call is eth_getBlockByNumber. Example snippet:

    private async Task ProcessBlock(BigInteger blockHeight)
    {
        var blockNum = new BlockParameter(new HexBigInteger(blockHeight));

        // Web3 BlockGetter
        var newBlock = await BlockGetter.Eth.Blocks.GetBlockWithTransactionsByNumber.SendRequestAsync(blockNum);

        if (newBlock == null) // if null, the Infura request returned nothing
        {
            for (int ii = 0; ii < NUM_RETRIES; ii++) // NUM_RETRIES = 3
            {
                await Task.Delay(500);
                newBlock = await BlockGetter.Eth.Blocks.GetBlockWithTransactionsByNumber.SendRequestAsync(blockNum);
                if (newBlock != null) break;
            }

            if (newBlock == null) // a fix that I do not want to have in my final product
            {
                Logger.Warn($"Skipping block {blockHeight}: not available from Infura after {NUM_RETRIES} retries");
                return;
            }
        }

        // Do stuff
    }

Thank you,

Thanks for your detailed question, Nabeel.

The request methods eth_getBlockByNumber and eth_getTransactionReceipt currently have some measurable latency between the block and transaction receipt data sources. Our current recommended best practice is to retry with an exponential back-off until you receive the data. I apologize for the inconvenience; this is something we've been working on fixing for our users. We are now close to implementing a fix and will be rolling it out to production in a couple of weeks. You will see an announcement from us once the new system is live and these propagation issues are resolved.
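The exponential back-off recommended above can be sketched language-agnostically; here is a minimal Python version. The `fetch_block` callable is a hypothetical stand-in for whatever makes the eth_getBlockByNumber request (in your case, the Nethereum `SendRequestAsync` call) and is assumed to return `None` while the block has not yet propagated. The retry counts and delays are illustrative, not Infura-prescribed values.

```python
import time


def fetch_with_backoff(fetch_block, block_number,
                       max_retries=5, base_delay=0.5, sleep=time.sleep):
    """Retry fetch_block(block_number) with exponential back-off.

    fetch_block: hypothetical callable returning the block, or None if
    the provider has not propagated it yet.
    """
    for attempt in range(max_retries + 1):
        block = fetch_block(block_number)
        if block is not None:
            return block
        if attempt < max_retries:
            # Wait 0.5s, 1s, 2s, 4s, ... doubling after each failed attempt
            sleep(base_delay * (2 ** attempt))
    return None  # give up after max_retries; caller decides how to handle
```

Passing `sleep` as a parameter keeps the function easy to unit-test (inject a fake) and easy to cap with a maximum delay if you want a bounded back-off.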

In the meantime, if you need any help with the retry logic, please let us know; we're happy to help.