Simple file sharing between devices and people using Fission

Description

Easy large file sharing between devices / people

Inspired by

Blaze getting #1 Product of the Day on Product Hunt.



https://blaze.now.sh/

User Impact

Anyone looking to move files between people and computers

Fission impact

A very general use case that can get us a lot of visibility.

How it works

  1. User opens the site
  2. Behind the scenes we start an IPFS node
  3. They click a button to upload a file
  4. They choose the file
  5. We give them a link to copy / share / email once we have the CID
  6. We prompt them to register with a Fission account or tweet about us if they want us to persist the file on the network for X days.
  7. When a user opens that link, we show a page that asks them where they’d like to save the file.
  8. It saves in the background via IPFS (see the sketch below).
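
A minimal sketch of this flow, assuming the js-ipfs browser API (the `ipfs-core` package); the link format and function names are made up for illustration:

```ts
import { create } from 'ipfs-core'

// Step 2: behind the scenes, start an in-browser IPFS node.
const ipfs = await create()

// Steps 3-5: add the chosen file and turn its CID into a shareable link.
async function share(file: File): Promise<string> {
  const { cid } = await ipfs.add(file)
  return `https://files.example/#${cid.toString()}` // hypothetical link format
}

// Steps 7-8: on the receiving side, stream the blocks back out of IPFS
// and reassemble them into a Blob the browser can save.
async function receive(cidString: string): Promise<Blob> {
  const chunks: Uint8Array[] = []
  for await (const chunk of ipfs.cat(cidString)) {
    chunks.push(chunk)
  }
  return new Blob(chunks)
}
```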

Features

  1. Upload UI
  2. Download UI
  3. CLI interface
  4. Option to have user persist file on the Fission network for X days
  5. Social share

I really like this idea. We’ll probably be serving files to the receiver over HTTP, but we may be able to hack that with a little page that loads the service worker. If the idea is “large files”, then the 1-2 MB needed to load IPFS is not the bottleneck, and you may get a speed boost from the torrent-like effects of IPFS. Imagine a bunch of people in an office pulling down the same file and serving it to each other, for instance.
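
One way that hack could look, assuming js-ipfs can run inside a service worker; the path scheme and details here are illustrative, not a working gateway:

```ts
// sw.ts: intercept requests for /ipfs/<cid> and answer them from IPFS,
// so the page "downloads over HTTP" while the bytes come from peers.
import { create } from 'ipfs-core'

let node: any // lazily-created in-worker IPFS node (a promise)

self.addEventListener('fetch', (event: any) => {
  const url = new URL(event.request.url)
  if (!url.pathname.startsWith('/ipfs/')) return // normal requests pass through

  const cid = url.pathname.slice('/ipfs/'.length)
  node ??= create() // boot one IPFS node the first time we need it

  event.respondWith(
    (async () => {
      const ipfs = await node
      const chunks: Uint8Array[] = []
      for await (const chunk of ipfs.cat(cid)) chunks.push(chunk)
      return new Response(new Blob(chunks))
    })()
  )
})
```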


I like this because it was the first thing I tried doing with IPFS.

I’m still learning so I apologize if this is 101 territory, but I’ve been wondering if it’s possible to optimize transfer speeds by spreading child blocks across the network then sharing a list of peers that a receiver could manually connect with.

Is this possible? Is this how Fission already works?

Yes, that is roughly how IPFS (the thing underneath) works. All blocks (chunks of files) have a unique key associated with them, and there’s a deterministic routing table for asking who has which block. You then connect to peers that have that block and stream it in.

The manual connection part (i.e. `ipfs swarm connect <peer_id>`) is really to get you out to the broader network. You’ll still get actual blocks directly from whoever has them :+1:
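
To make those two steps concrete, here is a sketch using the js-ipfs APIs. The multiaddr and CID below are placeholders, and the exact shape of what `findProvs` yields varies between versions:

```ts
import { create } from 'ipfs-core'

const ipfs = await create()

// The manual-connection part: dial a known peer to reach the wider network.
await ipfs.swarm.connect('/dns4/peer.example/tcp/4001/p2p/QmPeerId')

// Ask the routing table who provides a given block.
// ('QmSomeCid' is a placeholder; real code would pass an actual CID.)
for await (const provider of ipfs.dht.findProvs('QmSomeCid')) {
  console.log('provider:', provider)
}

// In practice, bitswap does all of this for you when you `cat` a CID.
```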

So, hypothetically:

If user A wanted to send 1,000 blocks to user B and you control 100 peers, could you direct your peers to host 10 blocks each? Then, when user B requests it, they can directly connect to your 100 peers and 100x their download speed?

It’s not really tunable like that. The behavior is built into the protocol, and as I understand it, if you are directly connected to multiple peers who have the blocks, then yes, it will stream from multiple peers.

Let me rephrase it as a question and let’s see if we can add it to the FAQ and ask some protocol experts:

How many peers will stream the blocks of a file to a requesting peer?


I’ve put a little more thought into this after reading up a bit on IPLD. I think there are a couple of options that might be available for giving a speed boost as an intermediary in a file transfer.

The first and simplest version could be setting up multiple peers on a single server. By sharing access to a common data folder, peers could seed different parts of the MerkleDAG in parallel using the built-in IPFS algorithms.

The second option would require more coordination but could allow multiple machines to seed in parallel with minimal duplication of data. It would involve chopping up the MerkleDAG into roughly equal branches of data using IPLD selectors (e.g. `root/to/branch/a/*`). Those branches could then be spread across servers, each operating as an individual peer.

The third option is the original idea with extra language to clarify. As I understand it, all the data in the MerkleDAG is held in the leaf nodes, with each leaf having its own unique CID. That means the leaf CIDs could be requested directly as root nodes before final reassembly using the full MerkleDAG for integrity.

A fourth option would be like a combination of two and three. Instead of a potentially complex algorithm for generating IPLD glob selectors (e.g. `root/path/to/branch/a/*`), it would use a lazy approach and generate a list of selectors down to each leaf (e.g. `root/down/to/leaf/CID`).
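
For options 3 and 4, enumerating the leaf CIDs could look something like this, assuming the js-ipfs `dag` API and a UnixFS dag-pb layout where the data sits in the leaves:

```ts
import { create } from 'ipfs-core'
import type { CID } from 'multiformats/cid'

const ipfs = await create()

// Walk the MerkleDAG from the root; nodes with no links are leaves.
async function leafCids(root: CID): Promise<CID[]> {
  const { value: node } = await ipfs.dag.get(root)
  const links: Array<{ Hash: CID }> = node.Links ?? []
  if (links.length === 0) return [root] // data-bearing leaf
  const children = await Promise.all(links.map((l) => leafCids(l.Hash)))
  return children.flat()
}
```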

Now, it’s important to note that this does hinge on Boris’ question about whether there’s any performance advantage to multiple peers.
My impression of IPFS is that the more widely available a file is, the more effective it becomes. That’s the torrent-style aspect of things.

Also, options 2, 3, and 4 might require more code execution on the requestor side. Instead of receiving just the original file’s root CID, the requestor would get a manifest including the root CID, a list of leaf-node CIDs or IPLD selector paths, and a list of peer IDs to configure the swarm. The manifest would then be used to set up a burst of asynchronous IPLD get requests.
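
One possible shape for that manifest, with the burst of parallel requests it would drive. All names here are hypothetical, and `block.get` is standing in for whatever fetch primitive turns out to be right:

```ts
interface TransferManifest {
  root: string     // original file's root CID, kept for the final integrity check
  leaves: string[] // leaf-node CIDs (or IPLD selector paths)
  peers: string[]  // multiaddrs of seeding peers to add to the swarm
}

async function burstFetch(ipfs: any, manifest: TransferManifest) {
  // Dial every listed peer so bitswap can stream from all of them at once.
  await Promise.all(manifest.peers.map((addr) => ipfs.swarm.connect(addr)))

  // Fire all the leaf requests asynchronously; reassembly against the
  // root CID's MerkleDAG would happen afterwards.
  return Promise.all(manifest.leaves.map((cid) => ipfs.block.get(cid)))
}
```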

Assuming there’s merit to any of these approaches, there’s an opportunity to monetize using a pay-per-peer model. Depending on the server architecture, it might even be possible to offer speed control on both the sender and receiver side, perhaps even mid-download…

Am I still missing something here?

I found this:

The CORS stuff always seems complicated, and I have my IPFS daemon in my Ubuntu container, so it’s not totally running “locally”.

Need to dig through it more.
