Hi! I’m looking for a good cloud storage provider for my backups. I will encrypt them locally and rclone them up, so rclone integration is important. I’ve been looking through Reddit, and every single provider seems to have some skeletons in the closet (closes accounts, scans files, sketchy, blah blah blah), so I’m having a bit of analysis paralysis.
Free tier would be ideal. I don’t need a lot of space, just a few GBs. Thanks :)
Scaleway S3-compatible storage with Duplicati.
Edit: if it’s text files I’d just use plain S3 (for ease of orchestration with Terraform), but go with whatever you decide. I see that you have selected Backblaze B2, which is quite good.
I’ve been trialing wasabi.com and recently switched to a paid plan: 1 TB for $7 with no egress fees. Using daily restic snapshots, btw.
Backblaze will only charge you for what you use. So 1TB is $6 per month, but 500GB is $3
Yep, I went with Backblaze and Mega as secondary.
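Since the OP mentioned encrypting locally and pushing with rclone, here’s a minimal sketch of what that can look like on B2. Remote names, bucket, and credentials are all placeholders, and recent rclone versions accept the key=value form shown here (otherwise just run the interactive `rclone config`):

```sh
# Placeholder names/credentials; the key ID and application key come from the Backblaze console.
rclone config create b2remote b2 account=B2_KEY_ID key=B2_APP_KEY

# Layer a crypt remote on top so file contents and names are encrypted client-side.
# --obscure tells rclone the password below is plain text and should be obscured in the config.
rclone config create b2crypt crypt remote=b2remote:my-backup-bucket \
    password=LONG_RANDOM_PASSPHRASE --obscure

# Push a local folder to the encrypted remote.
rclone sync ~/backups b2crypt: --progress
```

With the crypt remote in between, B2 only ever sees ciphertext.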
I’ve heard good things about https://www.tarsnap.com/ and https://www.rsync.net/
@jaykay@lemmy.zip maybe you can try filen.io
Restic + rclone [1] is a good combination. It supports encryption, versioning, dedupe, snapshots, etc. When I looked into offsite backups a couple of years ago I was originally focused on the cost of storage, but then realized data transfer costs can add up too.
After doing evals of S3, Wasabi, Backblaze, and Hetzner with restic, I ended up going with Google Drive. Flat annual price and no data transfer fees. Since restic does all encryption locally, I’m not worried about what the Big G can see.
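To make that concrete, here’s a rough sketch of the restic-over-rclone setup. It assumes an rclone remote is already configured (I’m calling it `gdrive` here), and the repo path and retention numbers are just examples:

```sh
# Assumes an existing rclone remote named "gdrive"; paths are placeholders.
export RESTIC_REPOSITORY="rclone:gdrive:backups/restic-repo"
export RESTIC_PASSWORD_FILE="$HOME/.config/restic/password"   # key stays local

restic init                                  # one-time: create the encrypted repository
restic backup ~/Documents ~/Pictures         # deduplicated, versioned snapshot
restic snapshots                             # list what you have
restic forget --keep-daily 7 --keep-weekly 4 --prune   # retention + cleanup
```

Restic chunks and encrypts everything locally, so the remote only ever stores opaque blobs.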
Oh wow, brilliant. Gonna have to do some testing with restic. Looks like I could use it to roll my own version of the CrashPlan back-up-to-a-friend setup that no longer exists.
Wasabi S3 is nice and cheap.
You’ll only pay what you use, so probably just a few cents in your case.
Oops, nevermind:
If you store less than 1 TB of active storage in your account, you will still be charged for 1 TB of storage based on the pricing associated with the storage region you are using.
Tbf it’s like 8 bucks a TB for anyone else reading this with 500 gigs or more
Backblaze, very cheap if you rarely recover the data
If we’re talking about Backblaze B2, downloads are free for 3x your average stored amount of data IIRC, so most recoveries would be free.
You could also pull it all out through Cloudflare, and then it should be completely free.
Wow, makes B2 hard to beat
AWS S3 has a free tier that covers the first 5 GB. I recommend it because the AWS CLI is excellent and gives you lots of options for how to sync your data. The pricing is $0.023/GB/month after the free tier. It can be overwhelming to get into AWS, but it’s worth it to have access to the ultimate IT-services Swiss Army knife.
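For anyone curious, the day-to-day workflow is roughly this (bucket name is a placeholder; `aws s3 sync` only uploads what changed):

```sh
# Placeholder bucket name; bucket names are globally unique, so pick your own.
aws s3 mb s3://my-backup-bucket-example

# Preview the changes first, then mirror the local folder into the bucket.
# --delete removes remote objects that no longer exist locally.
aws s3 sync ~/backups s3://my-backup-bucket-example/backups --delete --dryrun
aws s3 sync ~/backups s3://my-backup-bucket-example/backups --delete
```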
Wow, $24/TB? That’s 4x the cost of Backblaze B2? (Am I doing that math right?)
It’s complicated. I gave the most expensive pricing, which is their fastest tier; it stores your data redundantly across three availability zones and is designed for 11 nines of data durability. Additionally, the easy integration with all the other AWS services and the feature richness of S3 buckets make it hard to do a fair apples-to-apples comparison unless you have really well-defined needs. So I gave the highest price to keep it simple, and for someone who says they just have a few GB, any cost should be trivial.
How much is their cheapest Glacier tier? Seems complicated to calculate; there seems to be some relation to S3 storage, or maybe I’m just missing something? Haven’t looked that closely.
So you’ve just asked about the most confusing bit of AWS naming, because the service names have changed over time.
Before S3 had an archival tier, there was a separate service that AWS called Amazon Glacier, later renamed Amazon S3 Glacier.
Around 2012 AWS started adding tiers to S3, which made the standalone service redundant. I’d recommend you look at S3 proper unless you have something like a Synology that can directly integrate with the older job-based API used by the original Glacier service.
So, let’s say I have a 1 TB archive file, a single tarball, and I upload it to a brand-new S3 bucket, no versioning or special features, except a lifecycle policy that moves objects from S3 Standard to S3 Glacier Instant Retrieval after 0 days. So effectively, I upload the file and it moves to the Glacier storage class.
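For reference, a rule like that can be set from the CLI roughly as follows. The bucket name is a placeholder, and `GLACIER_IR` is the storage-class value for Glacier Instant Retrieval:

```sh
# Lifecycle rule: transition every object to Glacier Instant Retrieval as soon as
# the daily lifecycle run picks it up (Days = 0).
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "to-glacier-ir",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        { "Days": 0, "StorageClass": "GLACIER_IR" }
      ]
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-archive-bucket \
  --lifecycle-configuration file://lifecycle.json
```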
S3 Standard is ~$24/TB/month, and let’s say, worst case, our data sits in Standard for one whole day before moving:
$0.77 + $0.005 (API cost of the PUT)
Then there is the lifecycle charge to move the data from Standard to Glacier, with one request per object each way. Since we only have one object, the cost is:
$0.004 out of Standard
$0.02 into Glacier
The Glacier Instant Retrieval tier costs $4.1/TB/month. Since we would be there for all but one day of the month, the first bill would be about:
$3.95
From the second month onwards you would pay just the $4.1/month, unless you are constantly adding or removing data.
Let’s say six months later you download your 1 TB archive file. That would incur a cost of up to $30.
Now, I know that seems complicated and expensive. It is, because it’s built for the kind of needs I had in my former role as a director of engineering, with complex requirements and a budget to pay for them. It doesn’t make sense as a large-scale backup of personal data, unless you also want to leverage other AWS services, or you are truly just dumping the data away and will likely never need to retrieve it.
S3 is great for complying with HIPAA, feeding data into a CDN, and generally shuffling data around in a performant way. I’ve literally dropped a petabyte of data into S3 and it just took it and did its thing.
In my personal AWS account I use S3 as a place to dump cache contents built by Lambda functions and served up by API Gateway. Doing stuff like that is super cheap. I also use private git repos (CodeCommit), a private container registry (ECR), and a container host (ECS), and it’s nice to have all of that stuff just click together.
For backing up my personal computer, I use IDrive personal and OneDrive, where I don’t have to worry about the cost per object, etc. IDrive (not an Apple service) lets you back up multiple devices to their platform and keeps them versioned.
Anyway, happy to help answer questions. Have a great day.
Wow. Thank you for that incredibly detailed explanation!!
It does sound, though, like it’s POTENTIALLY cheaper than something like B2, but also much easier to misconfigure and end up in a more expensive tier.
Seems to me that unless you have a reason to use Amazon storage, or already have something using it, it isn’t the best choice for backups.
That’s a good takeaway. AWS is the ultimate Swiss Army knife, but it is easy to misconfigure. Personally, when you’re first learning AWS, I wouldn’t put more data in than you’re willing to pay for on the most expensive tier. AWS also gives you the option to set billing alerts, so if you do start playing with it, spend the time to set cost alerts so you know when something is going awry.
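As a sketch of what that looks like from the CLI (account ID, budget name, and email are placeholders, and it’s worth double-checking the JSON shape against the AWS Budgets docs):

```sh
# Create a $10/month cost budget that emails you once actual spend passes 80% of it.
aws budgets create-budget \
  --account-id 111111111111 \
  --budget '{
      "BudgetName": "personal-backup-cap",
      "BudgetLimit": { "Amount": "10", "Unit": "USD" },
      "TimeUnit": "MONTHLY",
      "BudgetType": "COST"
    }' \
  --notifications-with-subscribers '[
    {
      "Notification": {
        "NotificationType": "ACTUAL",
        "ComparisonOperator": "GREATER_THAN",
        "Threshold": 80,
        "ThresholdType": "PERCENTAGE"
      },
      "Subscribers": [
        { "SubscriptionType": "EMAIL", "Address": "you@example.com" }
      ]
    }
  ]'
```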
Have a great day!
Don’t hate me because I’m using Google Cloud Platform buckets…