It doesn't seem to be a residual ipfs-cluster issue; I was able to reproduce the problem on production today.
I ran lots of `.publish`es by changing stuff in the Flatmate app on both my phone and laptop at the same time. At some point their data got out of sync, but both were still loading fine. Looking at the console on the phone, this made sense:
```
DNSLink is outdated (1 newer local entry), using local CID: bafybeibrbnaheqojyju45gyohejge6x43p6yedxoqdzjs2ftzs4w7qvpoy
```
However, that message didn't go away with more `.publish` calls.
It did go away when I made some actual filesystem changes and called `.publish`:
```
Adding to the CID ledger: bafybeibxrkjpbuxfkj63ky5bc4ophxzweztmllqvb7tub2a77qinqae52y
Updating your DNSLink: bafybeibxrkjpbuxfkj63ky5bc4ophxzweztmllqvb7tub2a77qinqae52y
DNSLink updated: bafybeibxrkjpbuxfkj63ky5bc4ophxzweztmllqvb7tub2a77qinqae52y
```
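To make the difference concrete, the two cases looked roughly like this from the app's point of view (a minimal sketch only; it assumes `fs` is the WNFS instance obtained from `webnative.initialise()` and uses webnative's string-path `fs.write()`; the written content is a placeholder):

```js
// Case 1 (hypothetical sketch): publishing again without any filesystem change.
// This matches the behaviour above: the "DNSLink is outdated" message stuck around.
await fs.publish()

// Case 2 (hypothetical sketch): an actual content change, then a publish.
// This is the kind of call that produced the "Updating your DNSLink" log above.
await fs.write(
  "private/Apps/matheus23-test/Flatmate/state.json", // the app's state file mentioned later
  JSON.stringify({ touchedAt: Date.now() })
)
await fs.publish()
```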
Then, my laptop was able to fetch the newer version as well:
```
DNSLink is newer: bafybeibxrkjpbuxfkj63ky5bc4ophxzweztmllqvb7tub2a77qinqae52y
```
However, the laptop now got stuck at `webnative.initialise`.
It had this hash in the wantlist: bafkreiaj3frv75q7wy5ei4d3ck5znqslquul63a43xh4zwjkrh4j6nkxom
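(For anyone retracing this: the wantlist is easy to inspect from the console with js-ipfs' bitswap API; a quick sketch, assuming `ipfs` is the running js-ipfs node:)

```js
// Print the CIDs the local node is currently trying to fetch via bitswap.
const wanted = await ipfs.bitswap.wantlist()
console.log(wanted.map((cid) => cid.toString()))
```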
I checked my phone; it could actually fetch that CID:
```js
// Collect all chunks from an async iterable (ipfs.cat returns one).
async function toArray(generator) {
  let chunks = [];
  for await (const chunk of generator) {
    console.log("got another item");
    chunks.push(chunk);
  }
  return chunks;
}

toArray(ipfs.cat("bafkreiaj3frv75q7wy5ei4d3ck5znqslquul63a43xh4zwjkrh4j6nkxom")).then(x => console.log("done"));
```

which printed:

```
got another item
done
```
I took the data from `ipfs.cat` (from my phone) and manually (via copy-paste) `ipfs.add`ed it to my laptop.
(I think I just invented a new transport type: Transport on OSI layer 8.)
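The manual transfer was roughly the following (a sketch only, assuming js-ipfs on both devices, a block small enough to paste, and base64 copy-paste in between; note that `ipfs.add` only reproduces the same CID if chunking and CID settings line up, which happened to be the case here):

```js
// On the phone: collect the bytes behind the wanted CID and print them as base64.
const cid = "bafkreiaj3frv75q7wy5ei4d3ck5znqslquul63a43xh4zwjkrh4j6nkxom"
const chunks = []
for await (const chunk of ipfs.cat(cid)) chunks.push(chunk)
const bytes = new Uint8Array(chunks.reduce((n, c) => n + c.length, 0))
chunks.reduce((offset, c) => (bytes.set(c, offset), offset + c.length), 0)
console.log(btoa(String.fromCharCode(...bytes))) // fine for small blocks

// On the laptop: paste the base64 string and re-add the same bytes.
const pasted = "<base64 copied over from the phone>"
const data = Uint8Array.from(atob(pasted), (ch) => ch.charCodeAt(0))
const { cid: readded } = await ipfs.add(data)
console.log("re-added as", readded.toString())
```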
This got me one step further: webnative would now successfully `webnative.initialise`. This leads me to believe that the `private/Apps/matheus23-test/Flatmate` directory was able to load. However, it then got stuck on reading the `state.json` file within it.
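(The hang is essentially on this read; a sketch of what that step looks like, assuming webnative's string-path `fs.read()` and that `state.json` is stored as UTF-8 JSON:)

```js
// Hypothetical sketch of the read that hung on the laptop.
const raw = await fs.read("private/Apps/matheus23-test/Flatmate/state.json")
const text = typeof raw === "string" ? raw : new TextDecoder().decode(raw)
console.log(JSON.parse(text))
```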
So I did the manual "look for what hash it wants, go to the other node, `ipfs.cat` it, then copy the data over and `ipfs.add` it on the stuck node" routine again.
As soon as I finished doing that with two more hashes, it loaded fine.
Here are my takeaways:
- It's possible to get into a state where the server IPFS cluster doesn't store all of the data behind the DNSLink, by doing lots of (conflicting?) publishes.
- The data can't be fetched from one device to the other via js-ipfs; the nodes don't discover each other for some reason (see the sketch below).
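For the second point, a possible stopgap while the discovery issue is unclear is to dial the other node by hand from the console (a sketch, assuming js-ipfs' swarm API and that you can obtain a dialable multiaddr for the other device, e.g. via a relay; the address below is a placeholder):

```js
// See which peers we're currently connected to.
const peers = await ipfs.swarm.peers()
console.log(peers.map((p) => p.peer.toString()))

// Dial the other device directly so bitswap can fetch the missing blocks from it.
// Placeholder multiaddr: browser nodes typically need a relay or webrtc-star address.
await ipfs.swarm.connect("/dns4/relay.example.com/tcp/443/wss/p2p/QmRelayPeerId/p2p-circuit/p2p/QmOtherDevicePeerId")
```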