A (loosely written) Guide to Hosting an IPFS Node on AWS
We’ve launched an ipfs node over at ipfs.runfission.com! Try it out by:
- viewing objects at the gateway: https://ipfs.runfission.com/ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv
- connecting over TCP at:
/ip4/3.215.160.238/tcp/4001/ipfs/QmVLEz2SxoNiFnuyLpbXsH6SvjPTrHNMU88vCQZyhgBzgw
- connecting over secure websockets at:
/dns4/ipfs.runfission.com/tcp/4003/wss/ipfs/QmVLEz2SxoNiFnuyLpbXsH6SvjPTrHNMU88vCQZyhgBzgw
Big thanks to our friends over at Textile for an excellent tutorial series on setting up an ipfs peer.
We’re planning on writing up our own more polished tutorial series as well. But for now, here’s a walkthrough of everything we did to get the node up and running. All feedback/comments/questions are very much appreciated! If you decide to give this a run-through, let us know how it goes so that we can patch up any confusing parts or mistakes in the documentation.
This guide includes 3 parts:
- Setup IPFS Node
- Setup s3 as a datastore
- Setup TLS + (Secure) Websockets
So without further ado…
Setup IPFS Node
Setup EC2
- Launch instance
  - We used Ubuntu 18.04 64-bit (x86)
  - t2.large (t2.micro for free tier)
- add security group
  - call it something like `ipfs-node` (if you prefer the command line, there’s an AWS CLI sketch just after the ssh step below)
  - you can open it to all traffic for now
  - later you’ll want a more restrictive firewall that only allows:
    - All SSH traffic on port 22
    - All HTTP/HTTPS traffic on ports 80/443 respectively
    - All TCP traffic on port 8080 (gateway)
    - All TCP traffic on ports 4001 - 4003 (ipfs connection ports)
- create new key pair
  - we named ours `aws-ipfs-node`
  - after downloading the key: `cd ~/Downloads`
  - make the file read-only for the current user: `sudo chmod 400 aws-ipfs-node.pem`
  - move it to your ssh keys folder: `mv aws-ipfs-node.pem ~/.ssh`
ssh to instance
- `ssh -i ~/.ssh/aws-ipfs-node.pem ubuntu@$PUBLIC_DNS`
- in our case: `ssh -i ~/.ssh/aws-ipfs-node.pem ubuntu@ec2-3-215-160-238.compute-1.amazonaws.com`
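If you prefer the command line to the console, the security group rules above can also be set up with the AWS CLI. A rough sketch (assumes the AWS CLI is configured and you’re in the default VPC; in a custom VPC you’d pass `--vpc-id` and use the returned group id):

```bash
# create the group (same name we used in the console)
aws ec2 create-security-group --group-name ipfs-node \
  --description "ipfs node: ssh, http(s), gateway, swarm"

# open the ports listed above; 0.0.0.0/0 is wide open, so tighten this later
for port in 22 80 443 8080 4001 4002 4003; do
  aws ec2 authorize-security-group-ingress \
    --group-name ipfs-node --protocol tcp --port "$port" --cidr 0.0.0.0/0
done
```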
install ipfs:
- download ipfs (or the latest version): `wget https://dist.ipfs.io/go-ipfs/v0.4.22/go-ipfs_v0.4.22_linux-amd64.tar.gz`
- uncompress: `tar xvfz go-ipfs_v0.4.22_linux-amd64.tar.gz`
- move the binary onto your `$PATH`: `sudo mv go-ipfs/ipfs /usr/local/bin`
- cleanup: `rm go-ipfs_v0.4.22_linux-amd64.tar.gz`
- cleanup: `rm -rf go-ipfs`
CHECKPOINT:
`ipfs version` should print `ipfs version 0.4.22` (or whichever version you downloaded)
initialize repo
- edit your user profile to set env variables: `sudo vim ~/.profile` (or whatever text editor you use)
  - add `export IPFS_PATH=/data/ipfs` (or wherever you want your ipfs repo set up)
  - reload it: `source ~/.profile`
- create the repo directory: `sudo mkdir -p $IPFS_PATH`
- allow the current user access to the ipfs data: `sudo chown ubuntu:ubuntu $IPFS_PATH`
- init the ipfs repo with the `server` configuration: `ipfs init -p server`
- change config:
  - set max storage with `ipfs config Datastore.StorageMax XXGB` (if hooking up to s3, bump this up quite a bit)
  - enable the gateway: `ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080`
  - alternately, these can be changed by editing `/data/ipfs/config` directly
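You can read any of these values back to confirm the changes took:

```bash
# prints the current values straight from $IPFS_PATH/config
ipfs config Addresses.Gateway
ipfs config Datastore.StorageMax
```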
run daemon
- create a `systemd` service
  - create the file `/lib/systemd/system/ipfs.service` with contents:

```
[Unit]
Description=ipfs daemon

[Service]
ExecStart=/usr/local/bin/ipfs daemon --enable-gc
Restart=always
User=ubuntu
Group=ubuntu
Environment="IPFS_PATH=/data/ipfs"

[Install]
WantedBy=multi-user.target
```

- reload `systemd` so it finds the new service: `sudo systemctl daemon-reload`
- tell `systemd` that `ipfs` should be started on startup: `sudo systemctl enable ipfs`
- start `ipfs`: `sudo systemctl start ipfs`
- check status: `sudo systemctl status ipfs`
- should see something like:
```
● ipfs.service - ipfs daemon
   Loaded: loaded (/lib/systemd/system/ipfs.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-08-28 20:38:04 UTC; 4s ago
 Main PID: 30133 (ipfs)
    Tasks: 9 (limit: 4915)
   CGroup: /system.slice/ipfs.service
           └─30133 /usr/local/bin/ipfs daemon --enable-gc

ipfs[30133]: Swarm listening on /ip4/127.0.0.1/tcp/4001
ipfs[30133]: Swarm listening on /ip4/172.31.43.10/tcp/4001
ipfs[30133]: Swarm listening on /ip6/::1/tcp/4001
ipfs[30133]: Swarm listening on /p2p-circuit
ipfs[30133]: Swarm announcing /ip4/127.0.0.1/tcp/4001
ipfs[30133]: Swarm announcing /ip6/::1/tcp/4001
ipfs[30133]: API server listening on /ip4/127.0.0.1/tcp/5001
ipfs[30133]: WebUI: http://127.0.0.1:5001/webui
ipfs[30133]: Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/80
ipfs[30133]: Daemon is ready
```
CHECKPOINT:
Load `$PUBLIC_DNS:8080/ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv` in your browser and you should see the docs directory.
In our case: http://ec2-3-215-160-238.compute-1.amazonaws.com:8080/ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv
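The same check from the command line, if you prefer:

```bash
# expect an HTML directory listing of the ipfs docs folder
curl -s "http://$PUBLIC_DNS:8080/ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv" | head
```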
Setup s3 as datastore
Note: do this before adding data to your IPFS node; switching the datastore will corrupt any already-added data.
create s3 bucket
- we’ll call ours `ipfs-node`
- make sure it’s in the same region as your ec2 instance
- leave public access blocked (only our node should be accessing these objects)
- get access keys for the node
  - go to My Security Credentials in the aws console
  - Users in the sidebar
  - Add user
    - we’ll call ours `ipfs-node`
    - access type: programmatic access
    - for now: s3 full access, can tailor more later
  - make note of the access key and secret access key (do this now since you can’t access the secret key again)
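If you’d rather do this part from the command line, here’s a rough AWS CLI equivalent (assumes the CLI is configured with a user that can manage s3 and IAM; the names are just the ones we used above):

```bash
# bucket names are global, so you may need something more unique than ipfs-node
aws s3 mb s3://ipfs-node --region us-east-1

# programmatic user for the node, with s3 full access for now (tighten later)
aws iam create-user --user-name ipfs-node
aws iam attach-user-policy --user-name ipfs-node \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# prints the access key + secret; note them down now
aws iam create-access-key --user-name ipfs-node
```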
build tools
- install `go`
  - it must be the same version that was used to build ipfs; check with `ipfs version --all`
  - download: `wget https://dl.google.com/go/go$VERSION.linux-amd64.tar.gz`
    - in our case: `wget https://dl.google.com/go/go1.12.7.linux-amd64.tar.gz`
  - uncompress: `tar xvfz go1.12.7.linux-amd64.tar.gz`
  - move it into place: `sudo mv go /usr/local`
  - cleanup: `rm go1.12.7.linux-amd64.tar.gz`
- install build tools
  - `sudo apt update`
  - `sudo apt install make`
  - `sudo apt install build-essential`
- set env variables
  - edit `~/.profile`: `vim ~/.profile`
  - add `export PATH=$PATH:/usr/local/go/bin` at the bottom
  - add `export GOPATH=/home/ubuntu/go` at the bottom
  - reload: `source ~/.profile`
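Quick sanity check that the right `go` is on your `PATH` before building:

```bash
go version
# should print something like: go version go1.12.7 linux/amd64
```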
build and install s3 plugin
- clone the plugin repo and cd into it:
  - `git clone https://github.com/ipfs/go-ds-s3.git`
  - `cd go-ds-s3`
- build plugin: `make build`
  - if not building against the most recent version of IPFS, set the env variable `IPFS_VERSION=vX.Y.Z` (see the sketch after this list)
  - output is `go-ds-s3.so`
- install plugin: `make install`
  - moves `go-ds-s3.so` to `/data/ipfs/plugins`
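For our node (go-ipfs 0.4.22 from earlier), the build looked roughly like this; pin `IPFS_VERSION` to whatever `ipfs version` reports on your box:

```bash
# run from inside the go-ds-s3 checkout
IPFS_VERSION=v0.4.22 make build   # build against the go-ipfs version the node runs
make install                      # copies go-ds-s3.so into $IPFS_PATH/plugins
```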
config ipfs to use plugin
- edit the config: `vim /data/ipfs/config`
- there should be 2 entries under `Datastore.Spec.mounts`; replace the first with:

```json
{
  "child": {
    "type": "s3ds",
    "region": "us-east-1",
    "bucket": "$bucketname",
    "accessKey": "",
    "secretKey": ""
  },
  "mountpoint": "/blocks",
  "prefix": "s3.datastore",
  "type": "measure"
},
```

- make sure that `region` and `bucket` are set to the region and name of your s3 bucket
- use the `accessKey` and `secretKey` you generated earlier here
- edit `datastore_spec` to match the new datastore: `vim /data/ipfs/datastore_spec`, and change it to:

```json
{"mounts":[{"bucket":"$bucketname","mountpoint":"/blocks","region":"us-east-1","rootDirectory":""},{"mountpoint":"/","path":"datastore","type":"levelds"}],"type":"mount"}
```

- again, make sure that `region` and `bucket` match your actual s3 bucket
- restart ipfs: `sudo systemctl restart ipfs`
- make sure there are no errors: `systemctl status ipfs`
CHECKPOINT:
Upload a file to ipfs, then check s3 to make sure the ipfs dag object(s) got added there.
example:
- download some image: `wget https://fission.codes/assets/images/fission-1200x400.png`
- add it to ipfs: `ipfs add fission-1200x400.png`
- go check s3 and make sure it has ipfs objects
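With the AWS CLI you can do that last check without leaving the terminal (swap in your bucket name):

```bash
# block objects should show up in the bucket right after the `ipfs add`
aws s3 ls s3://ipfs-node --recursive | head
```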
Setup TLS + (Secure) Websockets
install nginx
- `sudo apt update`
- `sudo apt install nginx`
- check status: `systemctl status nginx`
- should see something like:
```
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-08-28 21:23:08 UTC; 32s ago
     Docs: man:nginx(8)
  Process: 31246 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCES
  Process: 31234 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status
 Main PID: 31248 (nginx)
    Tasks: 3 (limit: 4915)
   CGroup: /system.slice/nginx.service
           ├─31248 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
           ├─31251 nginx: worker process
           └─31252 nginx: worker process
```
CHECKPOINT:
Load `$PUBLIC_DNS` in your browser and you should see the default nginx welcome page.
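Or from a shell:

```bash
# the response headers should include a `Server: nginx` line
curl -I "http://$PUBLIC_DNS"
```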
setup a domain + TLS
- point a domain name at your instance’s public dns
  - `ipfs.runfission.com` in our case
- add certs + keys to `nginx`
  - easy mode: some tools exist to help with this. See `certbot`, for instance, which can generate certs and automatically create the nginx config for you (there’s a rough sketch just after this list)
  - manually:
    - import the key + cert to the instance:
      - `/etc/ssl/ipfs.runfission.com.key`
      - `/etc/ssl/ipfs.runfission.com.pem`
    - edit the nginx config at `/etc/nginx/sites-available/default`
    - start with 2 server blocks
    - the first is a simple server that redirects `http` traffic to `https`:

```nginx
server {
    if ($host = ipfs.runfission.com) {
        return 301 https://$host$request_uri;
    }

    listen 80;
    listen [::]:80;
    server_name ipfs.runfission.com;
    return 404;
}
```

    - the second proxies `https` traffic on `443` to the ipfs gateway at `8080`:

```nginx
server {
    server_name ipfs.runfission.com;

    listen [::]:443 ssl ipv6only=on;
    listen 443 ssl;
    ssl_certificate /etc/ssl/ipfs.runfission.com.pem;
    ssl_certificate_key /etc/ssl/ipfs.runfission.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```
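For the certbot “easy mode” above, the flow is roughly this (a sketch; package names for certbot and its nginx plugin vary by release, so check certbot.eff.org for install instructions):

```bash
# with certbot + its nginx plugin installed:
# fetch a Let's Encrypt cert for the domain and update the nginx config to use it
sudo certbot --nginx -d ipfs.runfission.com

# optional: confirm automatic renewal is wired up
sudo certbot renew --dry-run
```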
CHECKPOINT:
Load https://$DOMAIN_NAME/ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv and you should see the ipfs docs served over https!
add secure websockets
- tell ipfs to listen on a websocket port
  - edit `/data/ipfs/config`
  - add `"/ip4/0.0.0.0/tcp/4002/ws"` to `Addresses.Swarm`
  - recommended: allow relay hop by setting `Swarm.EnableRelayHop` to `true`
  - restart ipfs: `sudo systemctl restart ipfs`
  - (or make the same changes with the `ipfs config` CLI, as sketched below)
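If you’d rather not hand-edit the file, a sketch of the same changes via the `ipfs config` CLI (the two `4001` entries are go-ipfs’s defaults; keep whatever your config already lists):

```bash
# set the full swarm address list, including the new websocket listener on 4002
ipfs config --json Addresses.Swarm \
  '["/ip4/0.0.0.0/tcp/4001", "/ip6/::/tcp/4001", "/ip4/0.0.0.0/tcp/4002/ws"]'

# recommended: allow relay hop
ipfs config --json Swarm.EnableRelayHop true

sudo systemctl restart ipfs
```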
- setup a secure proxy with nginx
  - edit the nginx config at `/etc/nginx/sites-available/default`
  - add:

```nginx
server {
    server_name ipfs.runfission.com;

    listen [::]:4003 ssl ipv6only=on;
    listen 4003 ssl;
    ssl_certificate /etc/ssl/ipfs.runfission.com.pem;
    ssl_certificate_key /etc/ssl/ipfs.runfission.com.key;

    location / {
        proxy_pass http://127.0.0.1:4002;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

  - restart nginx: `sudo systemctl restart nginx`
CHECKPOINT:
- go to websocket.org and test your connection with `wss://$DOMAIN_NAME:4003`
The Real Test:
Start a js-ipfs node (using our awesome get-ipfs package) and connect to `/dns4/$DOMAIN_NAME/tcp/4003/wss/ipfs/$PEER_ID`, either by calling `ipfs.swarm.connect` or by adding the multiaddr to the node’s bootstrap list.
Wait a second (for the connection), then print the peer list and make sure that your hosted node is included:
```js
// assumes `ipfs` is a js-ipfs node that has already dialed the hosted peer,
// e.g. via ipfs.swarm.connect('/dns4/$DOMAIN_NAME/tcp/4003/wss/ipfs/$PEER_ID')
setTimeout(async () => {
  const peers = (await ipfs.swarm.peers()).map(p => p.peer._idB58String);
  console.log(peers);
}, 1000);
```