IPFS object patch API

I am testing out the idea of using Infura’s IPFS node as the service node (i.e. no local node). I understand it only enables a subset of the API, which mostly works.

The only thing I wish it had is the ‘object patch’ API. The reason is that if I add a full site (i.e. the ipfs add -r equivalent), it works, but every file needs to be sent over. And if subsequently only one file changes, the whole thing has to be sent over again (even though storage-wise it doesn’t matter much). This is a waste of bandwidth, not to mention the 100MB limit.
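For concreteness, the full re-upload looks roughly like this against the HTTP API (a sketch for a flat site; the endpoint is Infura’s API port and the file names are made up):

```
# Every file is shipped on every deploy, even when only one of them changed.
curl -s \
  -F "file=@site/index.html;filename=index.html" \
  -F "file=@site/app.js;filename=app.js" \
  "https://ipfs.infura.io:5001/api/v0/add?wrap-with-directory=true&pin=true"
```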

With the object patch API, I could selectively patch the nodes, which would save a lot of bandwidth.
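Against a node that does expose it (a local one, say), the flow would look something like this (a sketch; $ROOT is the site’s current root CID, app.js is a made-up changed file, and jq is assumed to be installed):

```
# Upload only the changed file to get its new CID.
NEW=$(curl -s -F "file=@site/app.js" \
  "http://127.0.0.1:5001/api/v0/add?pin=false" | jq -r .Hash)

# Swap the stale link for the new one; each call returns the new root CID.
ROOT=$(curl -s -X POST \
  "http://127.0.0.1:5001/api/v0/object/patch/rm-link?arg=$ROOT&arg=app.js" | jq -r .Hash)
ROOT=$(curl -s -X POST \
  "http://127.0.0.1:5001/api/v0/object/patch/add-link?arg=$ROOT&arg=app.js&arg=$NEW" | jq -r .Hash)
```

Only the changed file plus a couple of tiny node rewrites cross the wire, instead of the whole site.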

BTW, when will the commercial offering be available for IPFS?


Hi @gary, thank you for this information; it is very helpful in influencing our future development. We will definitely look at the object patch API and consider it against our offering.

We have not put a time on the IPFS premium product release, but it will be this year. We will keep everyone updated as dates become clearer.

Hi @mike, thanks.

After further testing and reading the docs, I found that Infura opens the object/put API, which is good enough for my usage (I can simulate the object/patch/add-link and object/patch/rm-link functionality).
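The simulation is just a read-modify-write of the DAG node JSON. A rough sketch of faking rm-link, assuming jq is available and that the node’s Data field round-trips cleanly through JSON on this server version ($ROOT and the link name "old.txt" are made up):

```
# 1. Fetch the current directory node as JSON.
curl -s "https://ipfs.infura.io:5001/api/v0/object/get?arg=$ROOT" > node.json

# 2. Drop the link to be removed.
jq 'del(.Links[] | select(.Name == "old.txt"))' node.json > patched.json

# 3. Put the modified node back; the response carries the new root hash.
curl -s -F "file=@patched.json" \
  "https://ipfs.infura.io:5001/api/v0/object/put?inputenc=json"
```

add-link is the same idea, except step 2 appends a {Name, Hash, Size} entry to the Links array instead of deleting one.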

Yet another thing I noticed: the public gateway https://ipfs.infura.io/ipfs/* doesn’t allow HEAD requests. I don’t know the reason behind this, but I think it is better to have it enabled than not (ipfs.io allows it, and so does cloudflare-ipfs.com).

The reason I need this is that I have noticed that if I have a local node and add something to it, it takes a while before it can be loaded from the infura.io endpoint. So one thing I was testing was doing a HEAD to ‘retrieve’ (cache) it.

Of course, if I use Infura as the service node, I don’t need to do this for the Infura endpoint, but I am doing it as a general pattern. It also helps with GC management, IMO: I don’t need to keep semi-permanent content pinned forever, only refresh it periodically, and if new content replaces the old, the old will go away on its own. A cronjob of curl would do the trick, as sketched below. Of course, GET would work too, but that would use up more bandwidth (on both ends, if the content is large). Currently, I am doing a GET and then closing the connection to simulate the HEAD.
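Something like this is what I have in mind for the cron script (a sketch; cids.txt is a made-up file with one CID per line, and whether an early-terminated GET still makes the gateway fetch and cache the full content is gateway-dependent):

```
#!/bin/sh
# Periodically re-request each CID so the gateways keep it warm.
while read -r cid; do
  # Preferred: HEAD, which transfers almost nothing (works on ipfs.io).
  curl -s -o /dev/null -I "https://ipfs.io/ipfs/$cid"
  # On gateways that reject HEAD, ask for a single byte instead.
  curl -s -o /dev/null -r 0-0 "https://ipfs.infura.io/ipfs/$cid"
done < cids.txt
```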