For those looking to 'de-Google' their lives and control their own data, Nextcloud is one of the best options out there.
I actually virtualize Unraid within ESXi so that one small 1U box can be both my router/firewall and an Unraid machine serving home services. Best setup I've ever had, and I learned so much along the way!
I run a cheap EC2 instance and plug it into an S3 bucket for file storage and my RDS MySQL database.
In my case, for hobby projects and various self-hosted services, I keep both MySQL & Postgres RDS instances running in perpetuity, both t3a.micro. On-demand pricing is roughly $13/month, but since I plan on keeping them running 'forever' I purchase reserved instances. For a 3-year plan, 'no-upfront', this brings the cost down to about $8.75/month. Much more palatable if you ask me :)
Also, I use them for multiple projects, so the convenience factor is worth it for me. For your NC alone, I imagine it would be good enough to just run your DB server on the same EC2 instance. I doubt the database storage would eat up much disk space.
You could, however, rip through a ton of disk space with file storage, so I feel like S3 buckets are a must, and they're cheap anyway.
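For reference, pointing Nextcloud's primary storage at S3 is a small block in config/config.php. A hedged sketch (key names per the Nextcloud admin docs; bucket and credentials are placeholders):

  'objectstore' => array(
      'class' => '\\OC\\Files\\ObjectStore\\S3',
      'arguments' => array(
          'bucket' => 'my-nextcloud-bucket', // placeholder
          'region' => 'us-east-1',           // placeholder
          'key'    => 'AKIA...',             // IAM access key
          'secret' => '...',                 // IAM secret
      ),
  ),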
It syncs everything, the iOS app and web dashboard are adequate. I would recommend it (but I haven't tried anything else, other than Google Drive or Dropbox, of course)
I could nuke the app server, change hosting providers, or hit a hardware failure or whatever, and it wouldn't matter. I can always spin up a fresh server and plug back into my external DB and data store.
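As a sketch of how disposable the app server becomes, assuming the official nextcloud Docker image (it accepts the external DB via MYSQL_* env vars on first run; the RDS endpoint and credentials here are placeholders):

  docker run -d --name nextcloud -p 80:80 \
    -v nextcloud:/var/www/html \
    -e MYSQL_HOST=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com \
    -e MYSQL_DATABASE=nextcloud \
    -e MYSQL_USER=nextcloud \
    -e MYSQL_PASSWORD=... \
    nextcloud
  # file storage comes back via the S3 objectstore block in config.php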
Performance was a little slow, but that could be down to my own hardware: just a consumer-grade i5 CPU and a basic SSD, running in Docker.
The examples they provide are good, but you can't really provide for every different config. I wanted to use traefik, so I brought the complexity on myself.
Here's where I got to before eventually stopping my trial of Nextcloud: https://gist.github.com/francis-io/935be5679b3308f5fbc3fe1bb...
My wishlist for future effort by the devs would be:
- Fully configured via env vars (and in Collabora too).
- I would rather any config or state be kept in the DB, which makes backup and restore easier. Env vars could be set in the DB, and on any restart the set env vars would overwrite anything in the DB. I want to have confidence that I can restore a DB + files and have a working service come back up. At the moment, I don't trust Nextcloud to always come back up.
- Keep config separate from user files.
- Focus on improving speed (which it looks like they are addressing with this post).
- Focus on more app usability; I remember it being hard to use in portrait.
Overall, the software is great and I'm looking forward to the future, but to store my personal data I will need to have a little more confidence.
(I can't seem to make a bullet point list on HN)
I ran Nextcloud in docker-compose for 2 years, with nginx doing SSL termination in front. Granted, I wasn't using the official image; I use the linuxserver.io releases for all my other services, so I used them for this too. Nextcloud's config is all in the DB, except for database and cache connection information in a single config file. PHP's config is in a separate file and some env vars (e.g. timezone).
I've recently moved it into my home k3s cluster (yeah, I'm one of those people), which means traefik is my new reverse proxy. Works fine. I found I can get traefik to do the DAV redirects, at least with the k8s Ingress config, but I don't need to, since the linuxserver image includes the redirects in its nginx configuration.
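For anyone wondering, the redirects in question are the well-known CalDAV/CardDAV ones from the Nextcloud admin docs, something like this in nginx:

  location = /.well-known/carddav { return 301 $scheme://$host/remote.php/dav; }
  location = /.well-known/caldav  { return 301 $scheme://$host/remote.php/dav; }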
I am with you. But. It's incredible how so many open source projects keep on delivering docker-compose files that either are not compatible with a reverse proxy or bundle a reverse proxy themselves.
It seems like the use case of having traefik/nginx as a RP doing the SSL termination for however many services you want is a fringe practice. Most of the apps/services I encountered could be blind to a RP, but I often have to work around it.
> I want to have confidence that I can restore a db + files and have a working service come back up. At the moment, I don't trust Nextcloud to always come back up.
Well. Today OVH tried to upgrade things, and it broke my VPS AND my ownCloud DB. Luckily I had an SQL dump backup, but the DB was so borked I couldn't log in to it, even as root from inside the container or in any other way.
I mean: don't trust the app provider to do the backup, set something up yourself.
> (I can't seem to make a bullet point list on HN)
For short points: indent with two spaces (longer ones become horrible on mobile). Or just put double newlines between the points like a normal person (;))
Though I do have a 4-line Caddy config and a Postgres server on the host.
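For the curious, a Caddy config at that size is plausibly just the following (domain and upstream port are placeholders; Caddy v2 fetches the TLS certificates itself):

  cloud.example.com {
      reverse_proxy 127.0.0.1:8080
  }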
I don't mind doing low risk patches every few months or weeks, but I don't want to do a major version upgrade every 4-6 months.
I did my last major version upgrade only 15 months ago, and I am now 4 major versions behind, which means:
1) I upgrade from 17->18->19->20->21 and hope nothing breaks, or
2) I start over with the latest version.
I like that open source moves fast, but at some point, I just want to stop fiddling with it and let it run with minimal maintenance.
I took a similar path (started from 18 IIRC) and nothing broke.
But there's a catch, because I have some safeguards in place:
1. Nextcloud has its own dataset in a ZFS zpool. I take snapshots hourly, and I took a snapshot just before upgrading
2. I run Nextcloud and its own PostgreSQL via docker-compose. The docker-compose file, along with the configuration and data, is stored in Nextcloud's own dataset, which means OS-level dependencies are not a problem for me. It also means reverting the whole thing to its pre-upgrade state is very easy: just roll back to the before-upgrade snapshot.
3. (Unrelated) Snapshots are replicated to another location, which means I could perform the upgrade on that other site and switch the DNS when it's done, if I'm satisfied. I don't do that; for my personal use, 1-2 hours of downtime is okay.
4. I'll let Nextcloud perform its auto-upgrade procedures, take a snapshot after every upgrade, and at the end perform the tasks suggested on the self-assessment page (adding indexes, changing column types, etc). The whole flow is sketched below.
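Condensed into a shell sketch (pool/dataset names and paths are placeholders of mine):

  cd /tank/nextcloud                            # dataset holding compose file + data
  zfs snapshot tank/nextcloud@pre-upgrade
  docker-compose pull && docker-compose up -d   # nextcloud runs its own migration
  # verify, run the suggested occ tasks; if anything is broken:
  docker-compose down
  zfs rollback tank/nextcloud@pre-upgrade       # add -r if newer snapshots exist (destroys them)
  docker-compose up -d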
You don't have a nextcloud problem, you have a system administration problem.
While I enjoy setting up and playing with these services, I need to think about managing them as little as possible, as I don't want to spend all my free time being a sysadmin.
Also, often a new release is not just a system admin task. Sure, it may not be _that_ hard to do a full backup, pull new docker images, spin them up and verify everything. The time sink comes from keeping track of all the releases of all the different projects, reading up about changes, how the upgrade process works, and so on.
On top of that, my family has become reliant on several of these services, especially Nextcloud and Bitwarden. The last thing they want is major changes. Long-term stability with minimal changes can be a feature!
I managed to reduce administration to a minimum by using watchtower to automatically upgrade my containers, mostly using the :latest tag.
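Watchtower itself is just another container; a minimal sketch (the containrrr/watchtower image, with an interval of my choosing):

  docker run -d --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower --interval 86400   # look for new images daily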
This bit me only twice in a few years:
- With the 19->20 migration of Nextcloud, I got one big blank screen when logging in, but synchronization was working. Turns out it was a new default app (something about dashboarding) that was causing it. Googling and fixing took an hour.
- With one upgrade of Home Assistant, my devices were not available anymore. There was a problem with the upgrade, which they fixed quickly, but I had already upgraded. Reading the docs/forum and fixing took an hour.
I can live with these two hours across two or three years.
I back up /etc on my server with Borg, and I know that, worst case, I will recover. I tested this DRP two weeks ago on bare metal (recovering to an empty VM from scratch, i.e. an Ubuntu ISO, and ultimately getting my encrypted backups from a friend's system). It really helped to highlight what I was missing.
I use a home-grade PC with Ubuntu LTS on which there is nothing except for:
- borg (backup program)
- wireguard (VPN)
I then copy /etc/docker from backup, mount some external disks with the data (either backed up or not for things I do not care about), reboot and I am done.
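In Borg terms that step is something like the following (repo URL and archive name are placeholders):

  # extracts to ./etc/docker relative to the current directory
  borg extract ssh://friend-host/backups/server::2021-02-25 etc/docker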
My recovery lasted one hour, from starting the download of the ISO to being back online.
However, the only proper backup solution that I could confidently say would let me recover should disaster strike was the one you just explained, i.e. putting everything in Docker and snapshotting the entire filesystem. At which point I'm basically running 3 virtual filesystems on top of each other just to have a better UI, which seemed a bit silly.
The thing is: you have a system administration problem whether you want it or not (that is a big part of what you're actually paying for when you buy Dropbox or when you let Google feed on your data).
Now, as a hobbyist, when you start depending on services you set up and manage yourself, it would be a good idea to take some time to learn additional tools to enjoy your hobbies more.
Think about this as in "leveling up" your hobby.
Now, on a lighter note, there are simpler ways to have a backup strategy, as long as you are okay with weaker guarantees.
You might not use ZFS and instead use simple LVM snapshots. Or you might use no snapshotting at all and just do a nightly backup via a cron job: at 3AM you shut everything down (docker-compose down if you're using it), rsync to another host, and start it back up. It's way simpler, but you'd only have yesterday's copy in case of a problem.
But then again, that would safeguard you when doing upgrades: disable the backup, perform the upgrade, test everything, re-enable the backup, resume operations. Worst case, you rsync back yesterday's data and resume normal operation.
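A concrete sketch of that nightly job, with placeholder paths and target host:

  # /etc/cron.d/nextcloud-backup -- cold copy at 3AM
  0 3 * * * root cd /srv/nextcloud && docker-compose down && rsync -a --delete /srv/nextcloud/ backup-host:/backups/nextcloud/ && docker-compose up -d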
I have a restic backup running on that plan instead of rsync, which means I get true backups. The nice thing about it is that this can be integrated into any docker-compose pipeline you like. I'm generally not as hot on Docker as a lot of people, but it does do a nice job of containing household services in a text file that can be easily checked into source control and easily backed up, as long as the service can run in Docker.
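The restic flavor of that plan is about as small; a sketch with a placeholder repo (assumes the repo was created with restic init and RESTIC_PASSWORD is set in the environment):

  export RESTIC_REPOSITORY=sftp:backup-host:/backups/nextcloud
  restic backup /srv/nextcloud                                  # deduplicated snapshot
  restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune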
It's a pity that Sandstorm started before Docker was a practical option for most people. There's probably some room for a Sandstorm 2.0 that "just" uses Docker and provides some grease around setting up this stuff on a system from a top-level configuration file or something. It would go from a massive project in which you have to "port" everything to something some hobbyists could set up. It wouldn't be as integrated, but it would work.
Though perhaps there was a shim layer? Eg over normal containers, it shimmed network/disk from the container over the Sandstorm RPC buffer?
Really cool tech regardless, but it had a big tech maintenance burden. That's my fear in all these self hosted apps. Everything needs to be maintained for it to feel good to the user, and that seems like such a tall ask.
yeah, yeah, absolutely. rsync is the first thing that came to my mind, but any tool that does a similar/equivalent job is fine here :)
Right. You can pay people to do things for you, or you can do them yourself, but either way the things have to be done, and they should be done by someone who is good at it and has a contract with you -- employment or otherwise.
I'm not 100% okay with this statement.
One has to be able to start somewhere. How do you "get good at it"? You proceed in steps: you challenge yourself, you reach an improvement, enjoy that improvement for a while, then you challenge yourself again when you see room for improvement.
But just saying "nah, let somebody else do that" is not what we want here. We're hobbyists; we want and enjoy doing stuff ourselves. Doing sub-optimal work is okay, we will improve over time :)
Sharing our experiences and procedures here is part of that.
For some hobbyists there's comfort in this repetition; for others, it's just a time sink with high opportunity cost.
What we're seeing is largely centralized applications and the work it takes to manage them. Ignore UX for a second, and imagine you wrote a database on top of a distributed system - ala IPFS - and all modifications were effectively pushed into IPFS. This suddenly boils the system administration tasks down to:
1. make sure my IPFS node is up to date
2. make sure my computer is online
And even those can be heavily mitigated with peers who follow each other.
Now, we're not there yet; I'm not advertising a better solution. I'm simply saying that part of the administration is a heavy lift simply because of how these apps were written. I think we can do better for the home user.
Secure Scuttlebutt is a lot easier to maintain, for example. The most important thing there is that you simply connect to the internet and publish your posts / fetch other posts. In doing so, other people make backups for you, and you of them. Backing up your key seems like the highest priority... and even that could be eliminated, I imagine, in the P2P model at least. Very low maintenance.
Nah. I had an elaborate home setup for a while as a hobby and the ongoing hassles (including NextCloud upgrade complexities) just led me to turning it all off and making do with simpler or no solutions.
I’ve learned my lesson about mixing hobbyist tinkering with something your family comes to expect as an everyday convenience - that while you on a random Saturday morning might be hyped about deploying the latest self hosted cool stuff, the other you on some random Thursday at 10pm when everything malfunctions is gonna hate past-you’s guts for putting you in this position.
Or be the parent of a geek and have it done for you, with 24/7/365 support and training, and remote support for magical things like "hey! I had a button appear and I pressed it and now I am not sure I have internet anymore". Of course said "customer" has no idea what was on the button. Etc. etc.
I am the geek and I love my parents :)
But you're missing an important point of view: do you rely on that data?
If it's a toy project, don't even bother, just ignore all my replies.
If you do rely on nextcloud and the data stored there, having a backup procedure and safeguards for the upgrade process helps a lot.
Next time you perform an upgrade, you can proceed without fear or stress, and much faster (if you run on Docker), which frees up time to play with Kubernetes clusters and webapp development :)
An LTS connected to a NAS would avoid all of that. Lol.
You're already getting quite a piece of software for free; demanding extended long-term support isn't really fair, especially if you consider that they offer a simple update procedure.
Plenty of software has it, so it's not unreasonable to simply discuss something that would be nice.
Yeah, that's why I pay for a managed K8s instance for my toy projects but do my own sysadmin work on various self-hosted things. The former is not my hobby so I'd rather pay someone else to do it.
This is an inherent limitation of our current tech stack, and unfortunately the cheapest mitigation we have is "take a full system snapshot", a.k.a. do your sysadmin work. The alternatives (LTS releases etc.) all cost much more money.
It's literally just branching at one release and fixing bugs in that release for a few years, which also benefits upstream branches.
That way people may lose new features but gain stability.
This takes engineering time, i.e. money. It may also benefit upstream branches, but again, porting patches between branches takes time, especially after massive refactoring has happened on the latest branch.
This sounds like a system administration problem.
Why, exactly, did you jump to docker/etc instead of what everyone (including NextCloud) recommends which is basically "keep a copy of your nextcloud folder and a dump of your database"?
If you're not confident you can properly recreate your nginx config, then keep a copy of that too.
At that point you're literally four steps away from a restore from a blank slate:
pkg install nginx php74 php74-extensions mariadb105-server
mysql -e 'CREATE DATABASE nextcloud;'
mysql nextcloud < backup/nextcloud.sql
rsync -a /path/to/backup/ /
(FWIW, my backup strategy is cron running a shell script that "rsync/mysqldump to second disk; rclone off-site". I've recovered from this successfully (from my local copy, no transfer times) in about a half hour.)
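Spelled out, that cron script is roughly the following (paths, second disk, and rclone remote name are placeholders):

  #!/bin/sh
  mysqldump nextcloud > /mnt/backup/nextcloud.sql                      # DB dump
  rsync -a --delete /usr/local/www/nextcloud/ /mnt/backup/nextcloud/   # app + data
  rclone sync /mnt/backup/ offsite:nextcloud-backup/                   # off-site copy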
Full disclosure, I'm the developer.
Pretty cheap, it takes away the administration burden and you are the one in control :)
Those aren't mutually exclusive. Sure, better dev ops would make major upgrades safer and easier. But for a hobbyist self-hosting their own instance, an LTS release would be a godsend, saving hours of unpaid work.
What a great idea!
I imagine those of us who want that kind of stability are encouraged to go with their hosted offering, but hopefully they'll see the value in having a slower and/or more stable release process.
For what it's worth, the upgrade process for the last few major versions went mostly without a hitch for me. I do have to give them credit for that. The only thing I continue to struggle with is the encryption design: I always end up with some odd state for some files that I cannot recover from.
Disclaimer: I have updated through several versions, but haven't upgraded to version 21 yet (it just got released).
I did not set mine up this way, but it apparently requires a lot less hands-on maintenance. In your case, you might be interested.
Apparently it auto-updates for you, but I'm not sure if it will upgrade major versions, or only security patches.
I just wanted to keep getting bug/security fixes for NextCloud.
I stay on the stable channel, and I get a notification if an app or nextcloud itself has an upgrade. The biggest issue is that the "Security & setup warnings" sometimes tells me I need to upgrade my database (and gives me the exact commands to do it) after an upgrade.
I will note that the upgrade has taken longer over the years (it used to take 5 minutes; now it can take over 30 minutes), and I think there is an issue with the backup stage.
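For reference, the "exact commands" are typically the occ database helpers, along the lines of (the web-server user varies by setup):

  sudo -u www-data php occ db:add-missing-indices
  sudo -u www-data php occ db:convert-filecache-bigint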
Every time it's basically:
mv nextcloud nextcloud.r19
mkdir nextcloud && pushd nextcloud && tar -zxf ../nextcloud-r20.tgz && popd
# (if the tarball nests a top-level nextcloud/ dir, add --strip-components=1)
cp nextcloud.r19/config/config.php nextcloud/config/config.php
# set permissions
cd nextcloud && sudo -u php php occ upgrade
The instructions they provide for a manual upgrade have never failed for me: https://docs.nextcloud.com/server/latest/admin_manual/mainte...
As far as software that needs upgrades, NextCloud has definitely been one of the least annoying things I have to deal with.
I don't know why they go about it in such a manual way. If you don't like the web installer, there's a command-line version that does everything for you (updater.phar).
Because I don't generally give the code permission to modify itself. Principle of least privilege and all that.
Outside of this one specific situation (upgrades) it's not needed, and the rest of the time it's just one more layer of security in the way of various forms of exploit. (Maybe it's just trauma from dealing with the 8,000 forms of WordPress exploit back in the day, and finding half of WordPress with random code added to it to persist exploits / randomly redirect people to scam sites / etc.)
In the end it adds like 5 minutes of inconvenience to my upgrade process.
In their defense, the software has grown a lot and does a lot more things nowadays; it's understandable that the upgrade process takes longer.
Ah, that might be it.
IIRC there's a database entry for each file; if you've got a lot of files, the upgrade might take a while, since it also runs database migrations to adapt to the new schema.
I've done this since about version 11. And I usually only get around to upgrading every few versions so it's been like... 11->12->13->14, 14->15->16, 16->17->18->19.
I do each upgrade one by one. Upgrade, login, check system status and resolve any additional steps it suggests (e.g., adding indices/columns, etc) then jump right into the next upgrade.
I've never had one fail on me. Even doing 3-4 major versions at a time it's usually less than a half hour problem.
Migration 18->19 is now stuck on
Step 4 is currently in process. Please reload this page later.
which is downloading the zip with the new version...
I restarted the installation multiple times and increased the php-fpm and nginx timeouts to 660 seconds, and I still get this error.
% php /var/www/nextcloud/updater/updater.phar
sudo -u nginx php updater/updater.phar
Nextcloud Updater - version: v18.0.9-8-g27dac77
Current version is 18.0.14.
PHP Fatal error: Uncaught Error: Call to undefined function NC\Updater\curl_init() in phar:///home/owncloud/updater/updater.phar/lib/Updater.php:455
#0 phar:///home/owncloud/updater/updater.phar/lib/Updater.php(119): NC\Updater\Updater->getUpdateServerResponse()
#1 phar:///home/owncloud/updater/updater.phar/lib/UpdateCommand.php(147): NC\Updater\Updater->checkForUpdate()
#2 phar:///home/owncloud/updater/updater.phar/vendor/symfony/console/Command/Command.php(256): NC\Updater\UpdateCommand->execute()
#3 phar:///home/owncloud/updater/updater.phar/vendor/symfony/console/Application.php(820): Symfony\Component\Console\Command\Command->run()
#4 phar:///home/owncloud/updater/updater.phar/vendor/symfony/console/Application.php(187): Symfony\Component\Console\Application->doRunCommand()
#5 phar:///home/owncloud/updater/updater.phar/vendor/symfony/console/Application.php(118): Symfony\Component\Console\Application->doRun()
#6 phar:///home/owncloud/updater/updater.phar/updater.php(10): Symfony\Component\Console\Application->run()
#7 /home/owncloud/updater/updater.phar(10): require('...')
thrown in phar:///home/owncloud/updater/updater.phar/lib/Updater.php on line 455
Sort of like docker - do you have to go through their root namespace for everything?
Petty as heck, but Nextcloud being entirely PHP (AFAIK) until now has been a huge turn-off. Moving some critical online bits to Rust is a huge indicator to me that the team is taking resource consumption & performance optimization seriously.
I maintain a list of software to help simplify the networking bits:
Also, I wish Nextcloud Talk used Matrix; there seems to be much duplicated effort between the two, and I am not even sure Nextcloud Talk federates.
Otherwise I completely agree with the sentiment.
I don't like Syncthing on mobile because it needs to maintain its connection to sync and therefore drains the battery. Also, there isn't a way to keep less than 100% of a particular share local to the phone, which isn't usually what I want on my phone.
Or am I misunderstanding your point?
I was just commenting on your migration to Syncthing, which is a superior syncing app IMHO. It's just that when I was using it, I realized I was missing the sharing ability, which is available in Nextcloud; hence my (somewhat unhappy) travel the other way round, from Syncthing to Nextcloud.
I think that Nextcloud is trying to cover too many things, with half-baked apps.
Either independent contributors who make money as consultants, or a foundation that gets sponsorship, or a commercial company behind the project: enterprise has the money. So inevitably, it will gravitate towards more enterprisey features.
I'm not saying that I have knowledge about what's happening here with Nextcloud. But in FLOSS this has been seen often, from Drupal to LibreOffice: projects move away from 'consumers with simple needs' and towards 'heavy users'.
I also wish they had a separate "light" offering with just the storage and a few basic apps. As it is, I think they are stretching their resources, and some part of their offering is going to suffer as a result (we already saw quite a few severe bugs in the past year, and some basic functionality, like file locking or caching, is still not right). Personally I'm only staying with Nextcloud because there's unfortunately no good alternative for now.
So personally, I'm very glad they are not just trying to be yet another cloud storage tool but are also working on these, IMHO more important, cloud services.
My experience with the Nextcloud Android app is that the automatic sync is quite limited (eg. https://github.com/nextcloud/android/issues/757, https://github.com/nextcloud/android/issues/19). Every change has to be manually synced by opening the app and navigating to the Sync option for each file. This is pretty much a dealbreaker for me, but it looks like a lot of people are using Nextcloud successfully. So I'm curious how your usage differs from mine - do you only use it for static unchanging files that don't need to be synchronized that often, or is the sync situation smoother on other devices?
Setting up my wife with NC on mobile, however, reminded me of lots of ways in which I've accustomed myself to some pretty weird behaviors, like manual syncing and the built-in text editor that doesn't load without being online.
I love NC (I use it both for personal needs and with students in my lab) but there are definitely UX issues that present a barrier to new users.
DAVx5 for CalDAV stuff, Nextcloud Notes for notes... these apps seem to handle the sync separately on their own.
The problem I see with services like this is that they all try to pack in everything. You can also install external components into your system.
What that means in practice is a huge surface area for security vulnerabilities, a challenge to host/upgrade at home on weekends, and a very complex user interface (easy to mess up the privacy settings).
I'm really scared to host such systems because of all the related issues. Maybe it isn't a big deal at all.
Probably most home use cases could be covered by a simple XMPP server (video calls, group chat, image/link sharing) plus a shared folder across the network for files/photos.
I don't care for whiteboards or collaboration, I just want a Dropbox equivalent where I can upload files and give other people public or one-time or expiring links to download/wget.
What do I use it for?
1. Notes (Use FSnotes and sync md files)
2. KeePassXC for passwords (synced via Nextcloud)
3. Photo uploads (from Amazon & Google)
4. My recordings & videos
5. Documents (Moved from G Drive)
Where would I like to see improvements?
Photos - I badly want this to be usable on mobile phones.
I am happy overall with Nextcloud. The only time I screwed up is when I didn't know about the upgrade process: I tried moving from 18→20 and it went totally wrong.
I use docker-compose, and Nextcloud is much different from all my other containers.
Edit: I was eager to see the link with the 10x performance number. I do hope it improves because we are in need of a service like that.
Static file serving is easy. If you don't even need SSL because it's all signed content, it's really easy. Linux has a syscall (sendfile(2)) where you can tell the kernel "ok, now send this file through this socket without bothering userspace anymore", meaning you get full kernel-mode file transfer without even context switching. I've got static file servers serving similar types of content shipping out dozens to hundreds of megabytes per second that barely hit 3% of one CPU.
> Presumably this newer backend does less stuff
Presumably not in terms of removing features, but in terms of having been refactored.
As far as I know it's very rare that someone bothers with exploiting denial-of-service bugs, but given how trivial this one was (triggerable by hand), it's still a bit risky.
The bug was of course reported to them, but closed as wontfix/dontcare because there were too many other ways of taking it down already. PHP was blamed IIRC (which really isn't the culprit).
Can you be more clear about what you mean by "a friend found that you can take that whole system down from a 56k modem"?
I have no idea what you mean by that. You mention denial of service. Are you claiming a Nextcloud instance can be DoS'ed by a single computer with a 56k internet connection?
Respectfully, that is quite a sensational claim/stance to take.
Without posting the specific exploit: the issue is with the server-side sleep() in the login system. If you spawn enough threads, which you could easily do in the given time even from a 56k modem, it will for some reason crash the whole thing. Tested with a couple of friends, and all the instances had to be restarted manually; none of them (running on different web servers) withstood it. It's not clear why, as the sleep should simply run through and then unblock the threads; for some reason that's not what happens.
Again, this was reported and they don't care. If you want more info, this should be enough to reproduce it without much effort and/or to ask them about it (not sure if they made the ticket public; the initial report was presumably private due to the pre-auth/unconditional nature).
That doesn't sound good. I guess as a personal user I'm not too worried about being DoSed, but that would certainly be more of a concern for a large organization evaluating the software.
If that is the case, then I certainly have an 'eyebrow raised'.
For example, from my machine, I can connect to my Nextcloud, and also to some folders shared from my group's Nextcloud.