Does the time it takes to filter past events increase with larger filters?

When using Infura, the TTFB for the getPastEvents call (below) is around 2.5 seconds with a filter list of around 5-6 elements. Will that time increase with a larger filter, and is there a way to speed this call up?

Background: When displaying a list of data from a storage contract, the user will want to view the tx that was responsible for the data. After loading a list of records, I run the code below to fetch the past events, which leads me to the transaction details. This way users can view the record data and then click a link to view it on a block explorer. Loading the records takes around 0.5s, and then the getPastEvents call is very slow, around 2-3s.

contract.getPastEvents('Inserted', {
  filter: { _recordId: [recordIds] },
  fromBlock: 0,
  toBlock: 'latest'
}, (error, events) => {
  // look up transactionHash etc. from the matching events
});

I was just thinking: maybe a possible feature Infura could provide is cached events for a specified contract address and event name, so that the call for past events doesn’t have to search the entire event log.


It will definitely take longer for requests with a larger filter. Great idea for a feature, we have discussed this internally as well. Have you tried setting up this type of caching on your side?


I thought so, and thanks. I actually hadn’t thought of us implementing an events cache for our users’ public dApps. Maybe that could work: still leverage Infura directly from the client with Web3 for the records, since those return quickly for now. We would just need an event listener daemon running at all times, pulling contract addresses and filter data off a database and writing the event data we need (not the entire event) back to a database. I’ve been hesitant to start archiving redundant blockchain data I can get directly from a node, but if that’s the only way, I guess we don’t have an option.
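A rough sketch of what that daemon loop could look like. Everything here is hypothetical: `fetchPastEvents` stands in for a wrapper around contract.getPastEvents via Infura, and `store` is any get/put database interface; only the fields the dApp needs are persisted, not the whole event.

```javascript
// Keep just what the UI needs to link a record to a block explorer.
function pickTxFields(event) {
  return {
    recordId: event.returnValues._recordId,
    txHash: event.transactionHash,
    blockNumber: event.blockNumber,
  };
}

// One pass of the daemon: resume from the last indexed block instead
// of block 0, fetch new events, and persist the trimmed-down fields.
async function syncEvents(fetchPastEvents, store) {
  const last = await store.get('lastBlock');
  const fromBlock = last == null ? 0 : last + 1;
  const events = await fetchPastEvents(fromBlock);
  for (const ev of events) {
    await store.put(`tx:${ev.returnValues._recordId}`, pickTxFields(ev));
    await store.put('lastBlock', ev.blockNumber);
  }
  return events.length; // number of events indexed this pass
}
```

Tracking `lastBlock` is what keeps the daemon cheap: each pass only scans blocks mined since the previous pass, rather than replaying the whole log from genesis.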

Maybe as a minimal experiment you could try caching information obtained for a specific user and check that cache before querying a node/Infura again. That way, info already queried should return much faster, and it is guaranteed to be consistent since mined events don’t change.
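That minimal experiment could be sketched as a caching wrapper like the one below. `queryEvents` is a hypothetical stand-in for a promisified contract.getPastEvents call; the cache key is the record id, and a hit never needs to touch the node again because a mined event is immutable.

```javascript
// Wrap an event-query function with a simple per-recordId cache.
// Only ids never seen before are forwarded to the node/Infura.
function makeCachedEventLookup(queryEvents, cache = new Map()) {
  return async function getEvents(recordIds) {
    const missing = recordIds.filter((id) => !cache.has(id));
    if (missing.length > 0) {
      const events = await queryEvents(missing);
      for (const ev of events) {
        cache.set(ev.returnValues._recordId, ev);
      }
    }
    // Serve everything from the cache; drop ids with no event yet.
    return recordIds.map((id) => cache.get(id)).filter(Boolean);
  };
}
```

On a repeat view of the same records the wrapper returns entirely from memory, so the 2-3s getPastEvents cost is paid at most once per record.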

Not a bad idea. The one thing I worry about is opening our system up to traffic from publicly available applications, when really they should be interacting directly with the blockchain layer using their wallet. We don’t yet support that architecture either; all requests into our API come with auth keys. Our user-generated dApps are publicly available, not behind an auth wall, and run off a single JSON file we serve from a public-readable S3 bucket/cache. It would really be ideal if the blockchain layer could provide fast enough response times for events that we don’t have to change our system to handle public loads.