Do any of you have that one service that just breaks constantly? I'd love to love Nextcloud, but it sure makes that difficult at times

atmur@lemmy.world to Selfhosted@lemmy.world – 965 points –
187

No. If I have to keep fixing it, it's not worth my time.

I installed ownCloud years ago, came to the same conclusion, and just got rid of it. I use Syncthing nowadays, though it's not the same thing.

Yep, I've adapted all of my setup to syncthing, and never looked back.

Any guidance on this? I looked into Syncthing at one time to back up Android phones and got overwhelmed very quickly. I'd love to use it in a similar fashion to NextCloud for syncing between various computers too.

Well, it works in a different way than NextCloud. You don't have a server, instead you just make a share between your computers and they are all peers.

It takes some getting used to the idea, but it's actually much simpler than NextCloud.

So if I wanted to sync photos from my phone to the computer, then delete the local copies on my phone to save space, that would not work?

E: But keep the copies on the computer, of course

You would have to move them into some folder you are not syncing.

@squidspinachfootball @marcos Syncthing syncs. It does one-way syncs, but if your workflow is complex and depends on one-way syncs, it's probably not what you want.

Sync things between operational systems, then replicate to non-operational systems, and back up to off-site, segregated systems.

It really wasn't all that complicated for me: install the client on two devices, set up a share on one device, go to the other device, hit "Add Device" and put the share ID in, then go back to the first device's admin page and allow the share.

I was very intimidated as well, I'll try to simplify it, but as always check the documentation ;)

This is the process I used to sync RetroArch saves between my Windows PC and Android phone (works well, would recommend, Pokémon is awesome). I've never done it on Linux, though I assume it's not too different.

https://docs.syncthing.net/intro/getting-started.html

I downloaded the SyncTrayzor program so that it would run in the tray; again, I'm not sure what the equivalent is, or if this would be necessary, on Linux.

No shade to the writers, but the documentation isn't super noob friendly, as I figured out. I'd recommend trying to cut out all the fluff, and boil it down to bare essentials. Download the program (whichever one seems right for your device, there's an app for Android) and follow the process for syncing stuff (I believe I used a video guide, but it's not actually as complicated as it seems)

If you need specific help I'd be happy to answer questions, though I only understand a certain amount myself XD

I'm absolutely at that point with Nextcloud. I kind of didn't want to go the syncthing route, but I'll probably give it a shot anyway since none of the NC alternatives seem any better.

I tried NC for a while; it would have taken me till the end of days to import all of my files.

I suspect I could keep it running by doing lockstep backups and updates. But it was just so incredibly slow.

I just want something that would give me remote access to my files with meta information about my files and a good search index.

I have been running the new ownCloud (oCIS) and, with some quirks and very basic functionality, it's been running for 2+ years and survived multiple updates without major complications.

I dunno what you guys are doing that makes your nextcloud die without touching it. Mine runs happily until I decide to update it, and that usually goes fine, too. I don't use docker for it, tho.

I dunno what you guys are doing that makes your nextcloud die without touching it

Mine runs happily until I decide to update it

I swear every update ends up breaking it and putting it into maintenance mode for me. This then leads to 1-2 hours of going through previously visited links to try and figure out what fixed it previously. For me personally, it seems like it's usually MariaDB requiring a manual update that fixes it, but it's always a little scary.

I always run occ upgrade and occ db:add-missing-indices after a package upgrade, just to be sure that I do not miss any database migrations. Using Archlinux I wrote a pacman hook so that it happens automatically.
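For anyone wanting to do the same by hand, the commands look roughly like this. The install path and web-server user here are Arch defaults and may differ on your distro:

```shell
# Run from the Nextcloud install directory as the web-server user.
# /usr/share/webapps/nextcloud and the "http" user are Arch defaults;
# adjust both for your distro.
cd /usr/share/webapps/nextcloud
sudo -u http php occ upgrade                  # apply any pending database migrations
sudo -u http php occ db:add-missing-indices   # add indices newer releases expect
```

A pacman hook just wraps a script containing these commands with a PostTransaction trigger on the nextcloud package.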

It’s the containerization causing this imo. I also host nextcloud on bare metal and it’s quite stable

I've been reading Nextcloud forums/reddit/lemmy/etc. for years now, and I feel like 90% of the problems are from people using Docker or whatever easy one-click solution is out there.

I've been running NC the old-fashioned way for years now and I've never had problems with NC dying for no reason.

Have I had issues? Of course... but not like the ones people keep coming here and shitting on NC about.

The only times I've had major issues that were actually a problem with Nextcloud were buggy major version releases... so I never install a new major release until X.0.1 these days. Haven't really had problems since.

In my own personal experience, Nextcloud:

  • Needs constant attention to prevent falling over
  • Administration is a mess
  • Takes far too long to get used to its 'little ways'
  • Basics like E2EE don't work
  • Sync works when it feels like it
  • Updating feels like Russian roulette

Updating from my experience is not Russian roulette. It always requires manual intervention and drives me mad. Half the time I just wget the new zip and copy my config file and restart nginx lol.

Camera upload has been fantastic for Android, but once in a while it shits its brains out thinking there are conflicts when there are none and I have to tell it to keep local AND keep server side to make them go away.

The update without fail tells me it doesn't work due to non-standard folders being present. So, I delete 'temp'. After the upgrade is done, it tells me that 'temp' is missing and required.

Other than that it's quite stable though... Unless you dare to have long file names or folder depths.

This could be it, but I also remember reading once it might be something to do with php.ini timeout settings too

It's like...having a toddler LMAO my little digital toddler lololol

Am I the only one left who doesn't want a snap/Docker/Kubernetes container and just installs Nextcloud the normal way, and has never had any problems?

Same here. I'm just installing it normally, and my nextcloud instance is just chugging along.

For me it's the opposite. I tried to use nextcloud for years, installing the normal way, and it always broke for no reason. I just started using it on docker and it has been perfect, fingers crossed.

Interesting, when I used docker on a proxmox build, it would give me trouble. Once I installed it the normal way on an Ubuntu build, it was good to go.

I wonder why that is?

Fingers crossed that it continues to work for you in the current configuration!

Because when you're using Docker, you shouldn't use Proxmox. And to be fair, I don't understand why people are using Proxmox at all.

I used Proxmox because it was free and open source with backup tools integrated into the system.

Same here, but after v25(?) it won't update on my RPi 4 any longer; I think they went 64-bit only?

Other than that no issues

None. I don't make a habit of keeping "misbehaving" apps around. If I can't get to the bottom of a specific issue that app is getting the boot from my stable.

I run it and mariaDB in docker and they run perfectly when left alone, but everything breaks horribly if I try to do an update. I recently figured out that you need to do updates for NC in steps, and docker (unRAID’s, specifically) defaults to jumping to the latest version. I think I figured out how to specify version now so fingers crossed I won’t destroy it the next time I do updates.

This is probably what I'm doing wrong. I'm using linuxserver's docker which should be okay to auto update, but it just continuously degrades over time with updates until it becomes non-functional. Random login failures, logs failing to load, file thumbnails disappearing, the goddamn Collabora office docker that absolutely refuses to work for more than one week, etc.

I just nuke the NC docker and database and start from scratch every year or so.

You absolutely need to move from version to version and cannot just do a multi-version jump safely. You also need to validate the configs between versions, especially on major release updates, or you risk breaking things. New features and optimizations happen, and you may also need to change or update your reverse proxy configuration on update, or modify db table configuration (just pulling this from memory, as I've had to do it before). I don't know that there's automation for each one of those steps.

Because of that, I run nextcloud in a VM and install it from the binary package. I wrote a shell script that handles downloading, moving the files, updating permissions and copying the old config forward, symlinking and doing the upgrade. Then all I have to do is log in as administrator, check out the admin dashboard and make sure there aren't new things I have to address in the status page. It's a pain, but my nextcloud uses external db and redis and PHP caching so it's not an easy out of the box setup. But it's been solid for a long time once I adopted using this script.

Would love to take a look at that bash script (or at least a template of it) if you wouldn't mind

Here you go:

https://pastebin.com/f5tL7xwx

There could probably be some additional refactoring here, but it works for my setup. I'm using default nginx paths, so they probably look different than other installs that use custom stuff like /var/www, etc.

Use it by putting it in a shell script, make it executable, then call it:

sudo scriptName.sh 28.0.1

Replace the version with whatever version you're upgrading to. I would highly recommend never upgrading to a .0; always wait for at least a .1 patch. I left some sleeps in there from when I was debugging a while back; those are safe to remove, assuming it works in your setup. I also noticed some variables weren't quoted. I'm not a bash programmer, so there are probably some consistency issues that could be addressed if someone is OCD.
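For readers who don't want to open the pastebin, here is a minimal sketch of the general shape such a script takes. This is not the linked script; the webroot, web user and backup naming are assumptions:

```shell
#!/usr/bin/env bash
# Rough sketch of a manual Nextcloud upgrade, assuming default nginx
# paths and a www-data web user. Not the pastebin script above.
set -euo pipefail

VERSION="${1:?usage: $0 <version, e.g. 28.0.1>}"
WEBROOT=/usr/share/nginx/html
INSTALL="$WEBROOT/nextcloud"
WEBUSER=www-data

cd /tmp
wget "https://download.nextcloud.com/server/releases/nextcloud-$VERSION.tar.bz2"

sudo -u "$WEBUSER" php "$INSTALL/occ" maintenance:mode --on
mv "$INSTALL" "$INSTALL.bak"                      # keep the old tree around
tar -xjf "nextcloud-$VERSION.tar.bz2" -C "$WEBROOT"
cp "$INSTALL.bak/config/config.php" "$INSTALL/config/"
chown -R "$WEBUSER": "$INSTALL"

sudo -u "$WEBUSER" php "$INSTALL/occ" upgrade     # run the actual migration
sudo -u "$WEBUSER" php "$INSTALL/occ" maintenance:mode --off
```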

Thank you for taking the time! This is a great resource.

For me everything has worked fine for years, EXCEPT Collabora. I use OnlyOffice now; it's much faster and very stable.

Yeah I don't like auto upgrades. Everyone says it's fine but that's not my experience.

My stuff isn't public facing so I'm not worried about 0-days

Only complaints I have with Nextcloud are that it's slow and updates suck over the web interface. But apart from that it has been reliable. I'm not running it through Docker. In fact, my installation is so old that the database tables still have an oc_ prefix.

You might want to try migrating your Nextcloud instance to Postgres instead of MySQL/MariaDB. Many people say they get a big performance boost. I'm going to try it myself next weekend to see if it's true.
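Nextcloud ships an occ command for exactly this migration. A hedged sketch, where the database user, host and name are placeholders and you'll be prompted for the password:

```shell
# Convert the live database to PostgreSQL in place.
# nc_user / db.example.com / nextcloud_db are placeholders.
# Take a backup first; the old database is left untouched.
sudo -u www-data php occ maintenance:mode --on
sudo -u www-data php occ db:convert-type --all-apps pgsql nc_user db.example.com nextcloud_db
sudo -u www-data php occ maintenance:mode --off
```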

+1, this is exactly my experience. My install must be 5-6 years old at this point and it's still on the rails. I've braved many PHP updates...

Mine is a snap install that started 3 years ago on virtual box and was ported over to proxmox. It has never broken, updates automatically, and generally seems to work just fine.

It doesn’t load instantly, but it doesn’t drag by any means.

When I first deployed Nextcloud, it was just like this. Random crashes, lockups, weird user signin issues, slow and clunky.

But one day it just started working and was super stable. I didn't do anything, still not sure what fixed it lol.

I have had Nextcloud running for nearly 5 years and it has never failed once. The only downtime is when the backup fails and somehow maintenance mode is still enabled (technically not a crash).

For those interested: running in Docker with MariaDB in a stack, checking for updates with Watchtower every day and pulling from stable, backups with Borg(matic).

I really don't understand all those posts. I use nginx, AppArmor, partially even ModSecurity, the official Collabora Office Debian package, face recognition, email, and I update regularly (waiting for every app I use to be updated before major upgrades), and I've literally never had a problem in the last 5 years except for my own experiments. True, only 5 people use my instance, but Nextcloud is rock solid for me.

Likewise. I have been running it for years, almost no problem that I can think of. My setup is pretty vanilla, Apache, MySQL. It's running in a container behind a reverse proxy. I keep it as up to date as possible. Only 3 people use mine, and I don't use very many apps: files, notes, bookmarks, calendar, email.

I was trying for the 3rd time to install the collabora office app in nextcloud. I think it's hilarious they know it's going to time out and they give you a bogus command to run to fix it. So unnecessarily irritating.

I've been running Nextcloud since before it was Nextcloud: it was ownCloud, then I moved to Nextcloud.

Another user put it best: it always feels 75% complete. Sync isn't fast and gives errors that self-correct when restarting the app. Most plugins are even more janky or feel super barren.

I wanted to like it so much but I stopped being able to trust most plugins which meant I had dedicated apps for those things and used nextcloud only for file sync.

If you only want file sync then seafile is vastly superior so that's what I now have.

Sounds like a common software issue: all the features were developed to 80%, and then the developers moved on to the next feature, leaving that last difficult, time-consuming 20% open and unfinished.

It's the difference between more corporate or Enterprise projects and FOSS projects in a lot of ways. Even once that project matures and becomes a more corporate product the same attitude towards completeness and correctness tends to persist.

(not saying foss is bad, just that the bar tends to be lower in my experience of building software, for many legitimate reasons).

It's "cultural" in a way depending on the project.

LibreOffice ships with broken rendering on Windows, but the changelog mentions tasty new features. Yet FOSS can get this right: Debian can. Those project managers should learn from Debian's approach, whatever it is.

Yeah, I wish Nextcloud focused more on the file manager side of their applications. I was using it on my TrueNAS instance and it seems like an unfinished product. E2EE is not enabled by default and looks like their implementation is not perfect either.

Always works great for me.

I just run it (behind haproxy on a separate public host) in docker compose w/ a redis container and a hosted postgres instance.

Automatically upgrade minor versions daily by pulling new images. Manually upgrade major versions by updating the compose file.

Literally never had a problem in 4 years.

I'm still too container stupid to understand the right way to do this. I'm running it in docker under kubernetes and sometimes I don't update nextcloud for a long time then I do a container update and it's all fucked because of incompatible php versions of some shit.

I don't remember much about how to use kubernetes but if you can specify a tag like nextcloud:28 instead of nextcloud:latest you should have a safer time with upgrades. Then make sure you always upgrade all the way before moving to a newer major version, this is crucial.

There are varying degrees of version specificity available: https://hub.docker.com/_/nextcloud/tags
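The pinned-tag approach looks something like this in a compose file; the service layout, volume path and port are just an example:

```yaml
# Pin the major version instead of :latest, then bump the tag
# one major release at a time when you're ready to upgrade.
services:
  nextcloud:
    image: nextcloud:28        # not nextcloud:latest
    volumes:
      - ./data:/var/www/html
    ports:
      - "8080:80"
```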

Make sure you're periodically evaluating your site with https://scan.nextcloud.com/ and following all of the recommended best practices.

Kubernetes is crazy complex compared to docker-compose. It is built to solve scaling problems us self-hosters don't have.

First learn a few docker commands, set some environment variables, mount some volumes, publish a port. Then learn docker-compose.

Tutorials are plentiful; if the ones from docker.com still exist, they're likely still sufficient.
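That progression, condensed into one concrete first command. The image, timezone, path and port are just examples:

```shell
# One container: set an environment variable, mount a volume,
# publish a port, and pin a tag instead of :latest.
docker run -d --name nextcloud \
  -e TZ=Europe/Berlin \
  -v /srv/nextcloud:/var/www/html \
  -p 8080:80 \
  nextcloud:28
```

Once a command like this makes sense, translating it into a docker-compose file is mostly mechanical.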

Installed Nextcloud-AIO using the docker script, took about 4 - 5 terminal commands. Practically zero issues! Hopefully someone else can provide some help in the thread!

Do you have office set up in it?

I have it set up. Try the AIO docker image. Once you get it set up, it pretty much just works. You just pick which office suite you want, check a few optional features if you want 'em, and it handles the rest for you. Most importantly, the AIO image is from nextcloud. They test it, it always works because it is the blessed version from them. If you're not a Linux guy, don't try the other installation methods, they're much, much more difficult.

I'll give it a shot. I've tried so many different approaches already. I think I maybe tried to install AIO straight onto a linux vm; don't recall how it got derailed. I did build a Lubuntu VM for experimentation. I really wanted to get an Ollama chatbot running to assist me in my future digital endeavors, but it just wouldn't come together.

This is ultimately why I ditched Nextcloud. I had it set up, as recommended, docker, mariadb, yadda yadda. And I swear, if I farted near the server Nextcloud would shit the bed.

I know some people have a rock solid experience, and that’s great, but as with everything, ymmv. For me Nextcloud is not worth the effort.

If all you want is files and sharing try Seafile

That’s what I’ve got running now, and for me Seafile has been rock solid.

Nextcloud has been super solid for me using the official docker image.

The problem child for me right now is a game built in node.js that I'm trying to host/fix. It's lagging at random with very little reason, crashing in new and interesting ways every day, and resisting almost all attempts at instrumentation & debugging. To the point most things in DevTools just lock it up full stop. And it's not compatible with most APMs because most of the traffic occurs over websockets. (I had Datadog working, but all it was saying was most of the CPU time is being spent on garbage collection at the time things go wonky--couldn't get it narrowed down, and I've tried many different GC settings that ultimately didn't help)
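Not a fix, but for anyone in a similar spot: Node has stock flags for watching the garbage collector directly, which can at least confirm whether GC pauses line up with the lag (app.js is a placeholder):

```shell
# Log every collection and its pause time to stderr:
node --trace-gc app.js

# Raise the old-generation heap ceiling (in MB) if collections
# are being triggered by memory pressure:
node --max-old-space-size=4096 app.js
```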

I haven't had any major problems with Nextcloud lately, despite the fragile way in which I've installed it at work (Nextcloud and MariaDB both in Kubernetes). It occasionally gets stuck in maintenance mode after an update, because I'm not giving it enough time to run the update and it restarts the container and I haven't given enough thought to what it'd take to increase that time. That's about it. Early on I did have a little trouble maintaining it because of some problems with the storage, or the database container deciding to start over and wipe the volume, but nothing my backups couldn't handle.

I have a hell of a time getting the email to stay working, but that's not necessarily a Nextcloud problem, that's a Microsoft being weird about email problem (according to them it is time to let go of ancient apps that cannot handle oauth2--Nextcloud emailer doesn't support this, same with several other applications we're running, so we have to do some weird email proxy stuff)

I am not surprised to hear some of the stories in this thread, though. Nextcloud's doing a lot of stuff. Lots of failure points.

Never had a single functional problem with Nextcloud, other than the fact that it's oppressively slow with the amount of files I've shoved into it. Mind you I also don't use MySQL/MariaDB which I consider a garbage-tier DB. Despite Postgres not being the "Recommended DB" for Nextcloud it works perfectly for me. Maybe that's the difference.

Postgres is the default DB in the AIO container that Nextcloud has put out as their standard.

The new Linuxserver.io docker image has at least solved NextCloud's annoying update cycle and seems to have removed the need to deal with that every few months. I haven't ever had it die, but I don't push it hard and I keep the plugins to a minimum, because I just don't trust it and it doesn't run all that well.

I've setup Nextcloud but have done next to nothing with it.

My Lemmy instance gives me the most problems, but it's also the only publicly available service I run. Mostly the issue is it seems to have a memory leak that forces me to restart it every few days.

Everything else has been completely rock solid for me, running on a mini pc (formerly a pi4 until I wanted to start doing stuff with Jellyfin and needed more power for transcoding) on OpenSUSE Leap all in docker containers. Makes it insanely easy to move stuff. I had no issues basically just copying the docker-compose files and data and bringing them up even when switching architectures.

I've just finally and fully spun down a proxmox server I've been running and updating as my home lab for six years.

Every major update seemed to break something. Upgrades were always a roll of the dice as to whether it would even boot. It's probably at least partially my fault for using an old R710 and running docker directly on the OS instead of within a container, but it was still by far my least reliable piece of kit.

The last apt update removed sudo, and I can't be arsed to rebuild, so I've moved the critical bits to a fleet of SBCs. Powering that fucker down was a huge relief.

For years, I had an unstable unraid server. I was fixing it every couple of days after a lockup. I had decided that unraid sucked. When it was up for a week I celebrated. Every one of my dockers was a suspect. I learned to hate all of them.

Then I shitcanned the next cloud docker.

Been up for months without a hiccup.

Well... no... I have been self hosting it for several years over multiple major versions now. Only for Files, Calendar and Deck though. It was a bit hard to set up, but reading the general Apache and PHP documentation helped a lot.

Perhaps ironically, Lemmy. I had the database catastrophically fail early on, and ever since then federation has been broken with most major instances. I kind of prefer lotide anyway: much more minimalistic, less of a focus on upvotes and downvotes, and the code base is simple enough that I've been able to hop into it and make changes.

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

  • CA: (SSL) Certificate Authority
  • DNS: Domain Name Service/System
  • Git: popular version control system, primarily for code
  • HTTP: Hypertext Transfer Protocol, the Web
  • LAMP: Linux-Apache-MySQL-PHP stack for webhosting
  • LXC: Linux Containers
  • PiHole: network-wide ad-blocker (DNS sinkhole)
  • RPi: Raspberry Pi brand of SBC
  • SBC: Single-Board Computer
  • SSH: Secure Shell for remote terminal access
  • SSL: Secure Sockets Layer, for transparent encryption
  • nginx: popular HTTP server

[Thread #392 for this sub, first seen 1st Jan 2024, 02:35]

Dude, it's like you're reading my mind. I've installed Nextcloud 4 different times, the most recent being on Docker Desktop in Win11. I've resorted to using ChatGPT to help me with the commands. LITERALLY EVERY STEP RESULTS IN AN ERROR. The Collabora office suite (necessary to view or edit cloud docs without downloading them) WILL NOT DOWNLOAD. The "php -d memory_limit=512M occ app:install richdocumentscode" command ChatGPT and Nextcloud suggest is not recognized by the terminal. You can't just download Collabora, cuz fuck you, I guess, and you can't access Docker's actual file system from Windows Explorer.

I've typed nonsense into various black screens for upward of 20 hours now, and nextcloud is "working" locally. I can access my giant hard drive from my android nextcloud app, but it's SLOW AS FUCK.

I can't imagine how many man-hours it would take to open the server to the internet. Makes me want to fucking barf just thinking about it.

I've been fucking with Linux since 2005 and have yet to get a single thing to work correctly. I guess I'm the only one who thinks an (mostly) invisible file system in incomprehensible repetitive folders, made of complete nonsense commands might not be the best way to operate a computer system.

I'm really frustrated if you can't tell.

On another topic, trying to get Ollama to run on my Lubuntu VM was also impossible. I guess if everyone knew it was going to force you to somehow retroactively configure every motherfucking aspect of the install nobody would bother. You can sudo all day and it still denies me permission to do things LISTED IN THE MOTHERFUCKING DOCUMENTATION.

Is this all just low-effort poorf** bullshit that doesn't actually work?

This got me googling Nextcloud and I think I'm going to give it a try 😱

Seriously homie, unless you're a fucking linux docker nerdshit wizard, you should find another way.

Thanks for the warning 🙂 Sometimes I still think I have as much spare time as 10 years ago 😉

You could be a legless NEET and not have enough time to get this fucking bullshit to work correctly.

Same with my Arch install: I hadn't touched it for 2 months (the laptop was turned off), and it decided to die when I launched it and ran pacman -Syu.

I regularly "deep freeze" or make read-only systems from Raspberry Pi, Ubuntu, Linux Mint LMDE and others Linux Distros whereas I disable automatic updates everywhere (except for some obvious config/network/hardware/subsystem changes I control separately).

I have had systems running 24/7 (no internet, no WiFi) for 2-3 years before I got around to updating/upgrading them. Almost never had an issue. I always expected some serious issues, but the Linux package management and upgrade system is surprisingly robust. Obviously, I don't install new software on an old system before updating/upgrading (learned that early on, empirically).

Automatic updates are generally beneficial and help avoid future compatibility/dependency issues on active systems with frequent user interaction.

However, on embedded, single-purpose, long-distance, dedicated or ephemeral applications, (unsupervised) automatic updates may break how the custom/main software interacts with the platform, causing irreversible issues with the purpose it was built for, or negatively impacting other parts of closed-circuit systems (for example: longitudinal environmental monitoring, fauna and flora observation studies, climate monitoring stations, etc.).

Generally, any kind of update implies some level of supervision and testing, otherwise things could break silently without anyone noticing, until a critical situation arises, everything breaks loose, and it is too late/too demanding/too costly to try to fix or recover within an impossibly short window of time.

I'd say it's your fault for running a system upgrade after 2 months and not expecting something to break, but it's not that unreasonable either.

I disagree--a system (even Arch!) should be able to update after a couple months and not break! I recently booted an EndeavourOS image after 6 months and was able to update it properly, although I needed to completely rebuild the keyring first
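For the record, the usual keyring-refresh dance on a long-idle Arch-based system is something like this. These are standard pacman commands, but the exact recovery steps vary case by case:

```shell
# Refresh the keyring package before the full upgrade, so that
# newer package signatures can be verified:
pacman -Sy archlinux-keyring
pacman -Su

# If signature checks still fail, reinitialize the keyring entirely:
pacman-key --init
pacman-key --populate archlinux
```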

Arch and EndeavourOS are the same thing. There is no functional difference between using one or the other. They both use pacman and have the same repos.

Very true--the specific EOS repo has given me a bit of trouble in the past, but it takes like 3 commands to remove it and then you've got just Arch (although some purists may disagree 🤣)

I'm using openSUSE Tumbleweed a lot. This summer I found an installation not touched for 2 years. I was about to reinstall when I decided to give updating it a try. I needed to manually force in a few packages related to zypper, and make choices for conflicts in a bit over 20 packages, but much to my surprise the rest went smoothly.

I know this is how it's supposed to be and how it should be, but sadly it doesn't always go this way, and Arch is notoriously known for this exact problem; the wiki itself tells you to check what's being upgraded before doing so, because it might break. Arch is not stable if you don't expect it to be unstable.

The solution for me is that I run Nextcloud on a Kubernetes cluster and pin a container version. Then every few months I update that version in my deployment yaml to the latest one I want to run, and run kubectl apply -f nextcloud.yml and it just does its thing. Never given me any real trouble.
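For illustration, the relevant slice of such a manifest might look like this. The names are made up, only the pinned tag is the point, and a real Deployment would also need selectors and labels:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud
spec:
  template:
    spec:
      containers:
        - name: nextcloud
          image: nextcloud:28.0.1   # bump by hand, then: kubectl apply -f nextcloud.yml
```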

I've been updating Nextcloud in-place (manually) for multiple major versions without any flaws. What is the problem?

Yea I've been using nextcloud for a while and it's fine.
I remember when I used owncloud before nextcloud was even a thing and the upgrade experience was absolute shit.
These days it's just fine.

Works great for me. I had it running in a snap for a while, but now I just have it in a Proxmox Debian container running a LAMP stack. I have over a terabyte of stuff saved and multiple computers syncing to it, so it's well used.

Installed it in k3s and then pulled up the Android app, but all it does is say every single file is a duplicate and overload my notification tray while not uploading anything.

I've hosted mine for years on my own bare metal Debian/Apache install and 28 is the first update that has been a major pain. I've had the occasional need to install a new package to enable a new feature, or needed to add new/missing indices to the database, but the web interface literally tells you how to do those things, so they're not hard.

28 though broke several of the "featured" apps that I use regularly, like "Retention". It also introduced some questionable UI changes that they had to fix with the recent .1 update. I'll get occasional errors when trying to move or delete files in the web interface and everything. 28 really feels like beta software, even though we're a point release in and I got it from the "stable" update channel.

I've not moved to 28 yet; I might wait a bit longer after your post. My 27 is rock solid. I don't understand why so many have issues with Nextcloud.

Maybe the docker installs are pants

I'm on my laptop so I thought I would elaborate on my first comment to give you things to watch out for if/when you update. I've been hosting mine with the zip file manually installed with my own Apache/PHP/MySQL/MariaDB setup for ages now without issue. It's been rock solid except for, like I said, the occasional changes required to take advantage of new features such as adding new indices to the database or installing an additional php addon. Here's the things that I noticed with updating to 28.

  • The 3 dot/ellipses menu was missing in the web interface and was replaced with dedicated buttons for "Download", "Add to Favorites" and "Delete". Shift clicking was also broken. This meant that when I, for example, take a lot of photos for a holiday, I can't use the web interface to select a large range of multiple files and then move them all from "InstantUpload" into a more permanent album. I either had to use the mobile app, or do them one at a time. The ellipses menu, along with the options to bulk "move/copy" have been added back since then with the *.1 update, but shift clicking in the web interface to select a range of files is still broken.
  • The "Retention" app, which is listed as a "Featured" app doesn't function any more. I used it to automatically delete backups of my Signal messenger, files in the "InstantUpload" folder that were over a year old, etc. You can enable it, but it doesn't actually work and just throws errors in the log file, which is now reported in the "Overview" portion of the "Administration" page with a note of "X number of errors since somedate", and prevents you getting the green checkmark. It's probably safe to assume that other apps will also have issues because I had half a dozen get automatically disabled with the update.
  • Occasionally when I use the web interface to move or copy a file, I'll get an error message that the operation failed. Sometimes this is true, sometimes it's not and the operation actually succeeded. If it ends up being true and the move did actually fail, doing it again results in a successful move.

It seems like they've made some substantial under-the-hood changes to the user interface that shouldn't have been shipped to the "stable" channel. It's not completely broken, it "is" usable, especially after they restored my bulk move/copy button, but I still can't use the Retention app, at least as of the last time I looked, so I've literally got daily cron scripts to check those folders for old files and delete them, then trigger an occ files:scan of the affected directories to keep the Nextcloud database in sync with the changes. This, however, bypasses the built-in trash bin, so I can't recover the files in the event of an issue. I actually considered rolling back to 27 for a bit, but decided against it, so if I were you, I would stick with 27 for a while and keep an ear to the ground regarding any issues people are having that are or aren't getting fixed in 28.
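For anyone curious, a minimal sketch of what such a cleanup cron job could look like. The paths, the username "alice", and the 365-day cutoff are all assumptions to adjust for your own instance, and note again that deleting this way bypasses Nextcloud's trash bin:

```shell
#!/bin/sh
# Hypothetical paths — adjust NC_ROOT, the user, and the folder to your setup.
NC_ROOT="${NC_ROOT:-/var/www/nextcloud}"
TARGET="alice/files/InstantUpload"
DIR="$NC_ROOT/data/$TARGET"

if [ -d "$DIR" ]; then
  # Delete anything older than a year (bypasses Nextcloud's trash bin!)
  find "$DIR" -type f -mtime +365 -delete
  # Re-scan the folder so Nextcloud's database matches the filesystem again
  sudo -u www-data php "$NC_ROOT/occ" files:scan --path="$TARGET"
fi
```

Dropped into /etc/cron.daily/ (or a crontab entry), it keeps the folder trimmed without the Retention app.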

Thanks for the heads up, will wait for 28.0.2 as that is currently cooking.

On the Retention app thing, I got into tagging to remove old backups. Will share in the morning how I set it up.

I have run nextcloud:latest on Docker for the last 2 years and have had 0 problems. Maybe upgrading continuously works better than jumping between releases.

The only device running Snap in my house is a Raspberry Pi running the Snap Nextcloud and it's rock solid.

This might be a deployment issue. How are most people running it?

I use docker and I get issues sometimes. I will admit though, when I used the snap a few years back I had no issues whatsoever.

Yeah, the Docker version hated me, mainly due to it sometimes getting a bit behind on updates and then having schema mismatches if I ran an update that skipped the previous one. No issues with the Snap thus far.

I used to have this problem. I started pulling a version number (like 27) instead of "latest" so that I could just pull minor releases when I did updates, and then I manually step up the version in the docker-config file for major versions when I'm ready for them. (I don't like to pull a major release version until there's been 1 or 2 maintenance releases since my nextcloud is fairly critical for my family)
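The version-pinning approach above can be sketched in Compose like this (the image tag and volume path are illustrative, not a recommendation of a specific version):

```yaml
# docker-compose.yml fragment — image tag and volume path are examples
services:
  nextcloud:
    # Pinning the major version means "docker compose pull" only fetches
    # 27.x maintenance releases; bump to 28 by hand once it has matured.
    image: nextcloud:27
    restart: always
    volumes:
      - ./nextcloud:/var/www/html
```

This avoids the schema-mismatch trap of `latest`, where a stale container can skip an entire major version on its next pull.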

I can’t remember the last time I laughed this much

I won't update without first creating an image of the server to roll back to. Like others on here, the web updater almost always fails and goes into maintenance mode and I have to ssh in to fix it.

Having said that, functionally, I have no issues. Only when upgrading does the whole thing shit the bed.

I gave up on owncloud just before it became nextcloud because it kept breaking every time I updated it.

Wallabag is similar for me now. I'm stuck on a slightly out of date version because I can't get newer ones to run. Everything else I self host is painless though.

I haven't had any issues with Nextcloud yet. But any torrent client refuses to work. I've tried various qbittorrent containers, transmission, deluge briefly, they all work for a while but eventual refuse to do anything.

That sounds more like a network problem.

I didn't realize that Nextcloud was so bad. Might I recommend that people having issues try Seafile? Also open source, and I've been using it for many years without issues. It doesn't have as many features and it doesn't look as shiny, but it's rock solid.

Have a random meme from my instance

https://seafile.kitsuna.net/f/074ad17b12ad47e8a958/

Nextcloud is just fine. I've been using it for more than 7 years now with zero problems.

I'm having a hard time believing that... There is a difference between being able to fix the update issues every time without problems and having no problems at all. But if so, neat.

Well dang, I have Nextcloud installed as a snap (which has been perfectly stable for me when running on Ubuntu Server), but I was thinking of switching over to a docker installation; this thread doesn't exactly fill me with enthusiasm for that idea...

Anecdotal, but I've had a container running Nextcloud in an LXC on Proxmox along with PiHole, Step CA, Bacula, and quite a few other services, and I've had zero downtime since June 2023. Even have Tailscale rigged to use PiHole as the tailnet DNS to have adblocking on the go.

Guess that restart: always value in the Compose config is pulling its weight lol

I ended up on the snap because I couldn't get the AIO install working properly. My snap version has been super solid. I think I'm gonna stick with it for a while.

Nextcloud for me too. It would break because of updates sometimes requiring manual DB updates, apps would randomly stop working after updating, and there were the 2 times it caused total data loss on all my synced devices and the server itself, which required a full restore from backups.

After getting rid of it and switching to Syncthing + Filebrowser + SFTPGo for WebDAV I haven't really had anything break since then (about a year now). Stuff also runs much faster, NC was extremely slow even on good hardware with all their recommended settings for performance.

If Nextcloud "caused total data loss on all my synced devices and the server itself" I would probably do something unsavory to any responsible party I could locate, and take 10 TB of data out of their lousy hide.

Yeah the first time was the time/date bug they had (still have?) where it set the time on every folder and file to 00/00/0000 00:00 across all clients and the server.

Second time was I disabled virtual file support on my laptop so it would sync everything, but instead it went and wiped all the files from the server, because for some reason their sync client assumed the laptop that now had no files on it should be the master source or something.

Their own docs even state that's how you're supposed to disable VFS, with no mention that it will wipe your server clean.

It would be absolutely sublime if it worked. Literally every step resulted in an error. EVERY STEP.

I had TOTP die for one user on my Nextcloud. I tried to disable it, but it "didn't exist". I tried to enable it, but it was already enabled. It would come up when I used occ twofactorauth:state user. I ended up fixing it by (force) disabling the app and re-enabling it. It didn't break any other user's TOTP and it fixed problem-user's TOTP. No idea what went wrong, but I get these random issues with Nextcloud sometimes.

The plus side to this is I've learnt how to use Mariadb and I've gotten better at debugging things.

I’m not self hosting an instance, but kbin is super fucking broken lately and it’s getting really frustrating. It’s been about a week. I submitted a ticket in their Git repo, but no response.

The most-recent release of lemmy dicked up outbound federation pretty badly on the instance I use.

The snap version of nextcloud has been pretty solid for me, except for the time that I installed the nextcloud backup app.

Invidious. It's to be expected for something like that though.

I wish there were an alternative in a sane programming language that I could actually contribute to. For some reason PHP is extremely sparse in its logging, and errors mostly only pop up on the frontend. Having to debug errors after an update by following some guide that has you edit a file in the live environment to set a debugging variable, put the system in maintenance mode, and store additional state in the DB is scary.

Plus PHP is so friggin slow. Nextcloud takes noticeable time to load nearly anything. Even instances hosted by pros that only host nextcloud are just slow.


You could check out Frappe Drive (and Frappe, the framework it's built on; it's pretty awesome). They aren't accepting contributions at the moment, but I'm sure that'll change once it's out of beta, like with the other Frappe apps. There's also Raven, a messenger also built on Frappe, and you can use the two together (without any real integration between them yet, but that's on the roadmap on the Raven side).

I've spent a lot of time researching alternatives and NextCloud is the only one that does everything it does in one place. I've dug into the code a lot to find places to make it work faster and came out confused and mostly empty. It's also federated, and I think it's the only FOSS file sharing platform that is. It's a very mature application, so you'll be hard pressed to find features that are missing, but also to find things that could be further optimized without ripping out major chunks of the application, which are likely interconnected with other major chunks. For my personal use NextCloud instance I've resorted to just completely deleting the database and installing everything fresh between major versions, then just rescanning my local folder.

Currently dealing with this nonsense,

and this accompanying nonsense:

Why is your Collabora server on local host? Local host will always point to the device you are trying to access from. You need a publicly accessible URL

"Local host" in this instance refers to my desktop computer where all my super sweet Linux distros are saved. Nextcloud Office, while being an "app" appears to not have any function without the collabora; i.e. there will be no document viewer without the collabora "server" running next to NextCloud.

Or maybe it's none of that. Coming from a Windows background, running docker is completely foreign.

Collabora needs to be accessible at the URL you provide. As an example, one might have Nextcloud at nc.example.com and Collabora at cb.example.com; you would then need to enter cb.example.com as the URL.

The easier way is to use the Nextcloud all-in-one image or the built-in Collabora. It's not going to be as robust or fast, but it's much simpler.
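For the separate-container route, a rough sketch (the collabora/code image is real; the domains, TLS/reverse-proxy setup, and port mapping are assumptions to adapt):

```yaml
# Fragment: run Collabora next to Nextcloud. Put a TLS-terminating reverse
# proxy in front of port 9980 and publish it as e.g. cb.example.com.
services:
  collabora:
    image: collabora/code
    environment:
      # Regex of the Nextcloud host(s) allowed to use this WOPI server
      - domain=nc\\.example\\.com
    ports:
      - "9980:9980"
    restart: always
```

In Nextcloud's Office admin settings you would then enter https://cb.example.com (not localhost) as the Collabora server URL.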

To be honest, no. I run it in a TrueNAS jail, and it's stable for me. Just a bit slow for big files sometimes.

For me it’s Pi-hole. For six months it runs fine, then dies so horribly I resort to snapshot rollback and we both pretend it never happened.

Weird. I've had a Pi-Hole + Unbound running on a Pi Zero since 2018 and it's never had any issues. I expected the Zero to kinda suck but it has been nothing but smooth sailing. It gets USB power from my router and even if my router reboots the Pi also auto reboots itself.

I do next to no maintenance on it and it just keeps on chugging along. Maybe once every six months or so I SSH in and do a pihole -up and that's it.

The very same reason why I gave up on Nextcloud. Too many nasty surprises.

Did you find a self-hosted solution?

I found a service that syncs our calendars self-hosted. That was the only thing that was missing. Can't remember the name, works flawlessly and without any problems for a number of years now. If you are interested, I'll look it up next weekend.

I want my docs and files on a self-hosted cloud (I can't seem to get SFTP, FTP, or sharing to work on Windows 11, even after adding the missing features), with the ability to at least open the contents without downloading them. I want to stop using Google for calendars and notes, and it would be handy to have a self-hosted bulletin board I and my added users could write on.

According to the box, nextcloud does all these things, except that it doesn't, without practically rewriting the code and somehow re-engineering linux to not be a fucking cunt.

When you are working locally, why don't you use Samba for storing and sharing of documents?

I've tried and tried. It just won't work. Maybe I need to get a different firewall program. I'm working in Pro, added the features, made the firewall exceptions, have my network setting as "private," I've done everything. The host will be visible on the network, but logins time out or fail altogether.

Since writing my rant, I found HFS, which, though an OLD program, was stupid stupid easy to set up.

I also found Filebrowser, and though the config was way more of an asspain than it should have been, it's fucking awesome. I've even moved on to trying to get HTTPS running for external connections using Win-Acme, but it isn't going well.

Please do! I spent a solid day researching open source CalDAV servers/clients to replace Google Calendar for my boss. Almost no options on that front.

I have used Baikal for the CalDAV server, with DAVx5 on Android. Was solid. Moved to NC for files, so went ahead with calendar sync on NC too. NC calendar sync has always worked well for me, no hiccups.

The only issue I've had with NC is auto upload of photos from my phone. It constantly has conflicts. Otherwise sync of regular files works great.

Paperless often randomly stops accepting new documents. I have to wait several hours or restart it.

Yep. Got such a service as well. I've got this one docker container that's supposed to connect to a VPN and provide access from the outside to another one. The bitch keeps just crashing to a point where even "restart policy: always" will give up on it. Doesn't matter too much usually, since I can start the container before I need it, and it will usually run for half a day or so, yet still

It is fine, but then again I often update it too late, which is actually pretty bad. The problem is Nextcloud pushes out new features on a high-frequency release schedule at an alarming speed. Perhaps for corporate environments that is not as big of a deal, as a professional team can fix obscure bugs with their knowledge and experience on their mirrored test servers, but home users don't have these resources available, and public community knowledge and bug fixes need time, which that release schedule hinders.

I still wouldn't say it is bad by default, simply because it has somehow run pretty stably for me for a decade. Updates are a pain, though, with many breaking changes and little bugs.

Not using Nextcloud. Found it more difficult to deploy and maintain than OwnCloud. Since then, I haven't had any problems with OwnCloud.

Bad stories about Nextcloud scare me 😂 I hope I'm not gonna jinx myself, but my Nextcloud has run super stable for almost a year. I get some errors while updating, but the service doesn't stop working and it's usually a simple fix by following the message it shows.

I removed apps that I don't use (most of them) and the web UI became super fast on my budget server.

Actually all my services are so smooth, with almost no issues; maybe beginner's luck 😉

I must be in the minority. I don't trust swarm syncing or the cloud.

Nextcloud can be self hosted... It's not really "the cloud". Can be LAN only if you want

I'm with you.

Local everything I possibly can.

My Nextcloud has been flawless. The only issue I've had was NFS permissions. I have automatic update setup for docker so it stays up to date.

Care to share what broke?

This is Seafile for me. Definitely not the "set it and forget it" Google Drive alternative I was hoping for. Thank goodness I have Syncthing backing up important files, but sharing with friends and family is a nuisance.

My wonderful MongoDB powered, old as fuck mFi vm. It's running on Ubuntu 14 because that's the last supported version and Ubiquiti abandoned this shit decades ago. It's set to restore and reboot once a month. That usually keeps shit working lol

Please tell me you don't connect that to the internet

I like to imagine it being pickled like Ozzy or Keith Richards.

Haha fuck no.

EDIT: I kind of wish i had said yes just to spice things up.

I run a k3s Kubernetes cluster on a single KVM host(multiple VMs). Honestly I do not care a single f*ck about that machine nor k3s itself. I update once a year, do not have any documentation written nor IaC somewhere. I always forget how I configured the networking stuff for example. But that machine runs my critical services flawlessly without a single crash in like 3 years. So no I cannot relate.

I don’t, I just keep running out of memory on my servers….

What we need is something that is a) Private (not saying nc isn't) b) Independent of any judicial government c) P2P and ultra redundant d) Run by a true non-profit (not like openAI) e) Massively distributed, process wise and storage wise f) OS independent, written in pure C or Rust.

This has been a serious concern of mine. In the event that I prematurely die I have everything set up with automatic updates, so that hopefully my family can continue to use the self-hosted services without me.

Nextcloud will not stop shitting the bed. I'd give it a few months at most if I died, at which point my family would likely turn back to Google Drive.

I'm looking for a more reliable alternative, even if it's not as feature-rich.

Take that as you will, but the vast majority of the complaints I hear about Nextcloud are from people running it through Docker.

Does that make it not a substantive complaint about nextcloud, if it can't run well in docker?

I have a dozen apps all running perfectly happy in Docker, i don't see why Nextcloud should get a pass for this

I have only ever run nextcloud in docker. No idea what people are complaining about. I guess I'll have to lurk more and find out.

See my reply to a sibling post. Nextcloud can do a great many things, are your dozen other containers really comparable? Would throwing in another "heavy" container like Gitlab not also result in the same outcome?

Things should not care or mostly even know if they're being run in docker.

Well, that is boldly assuming:

  • that endlessly duplicating services across containers causes no overhead: you probably already have a SQL server, a Redis server, a PHP daemon, a Web server, … but a docker image doesn't know, and indeed, doesn't care about redundancy and wasting storage and memory

  • that the sum of those individual components work as well and as efficiently as a single (highly-optimized) pooled instance: every service/database in its own container duplicates tight event loops, socket communications, JITs, caches, … instead of pooling it and optimizing globally for the whole server, wasting threads, causing CPU cache misses, missing optimization paths, and increasing CPU load in the process

  • that those images are configured according to your actual end-users needs, and not to some packager's conception of a "typical user": do you do mailing? A/V calling? collaborative document editing? … Your container probably includes (and runs) those things, and more, whether you want it or not

  • that those images are properly tuned for your hardware, by somehow betting on the packager to know in advance (and for every deployment) about your usable memory, storage layout, available cores/threads, baseline load and service prioritization

And this is even before assuming that docker abstractions are free (which they are not)

Most containers don't package DB servers, precisely so you don't have to run 10 different database servers. You can have one Postgres container or whatever. And if it's a shitty container that DOES package the DB, you can always make your own container.

that those images are configured according to your actual end-users needs, and not to some packager's conception of a "typical user": do you do mailing? A/V calling? collaborative document editing? … Your container probably includes (and runs) those things, and more, whether you want it or not

that those images are properly tuned for your hardware, by somehow betting on the packager to know in advance (and for every deployment) about your usable memory, storage layout, available cores/threads, baseline load and service prioritization

You can typically configure the software in a docker container just as much as you could if you installed it on your host OS... what are you on about? They're not locked-up little boxes. You can edit the config files, environment variables, whatever you want.

Most containers don’t package DB programs. Precisely so you don’t have to run 10 different database programs. You can have one Postgres container or whatever.

Well, that's not the case for the official Nextcloud image: https://hub.docker.com/_/nextcloud (it defaults to sqlite, which may well be the reason for so many complaints), and the point about service duplication still holds: https://github.com/docker-library/repo-info/tree/master/repos/nextcloud
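That said, the official image does document environment variables for pointing it at an external database instead; a sketch, with placeholder hostnames and credentials:

```yaml
# Fragment: official nextcloud image with a shared Postgres instead of sqlite
services:
  db:
    image: postgres:16
    environment:
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=changeme   # placeholder — use a real secret
  nextcloud:
    image: nextcloud:27
    depends_on:
      - db
    environment:
      # These tell the installer to use Postgres instead of the sqlite default
      - POSTGRES_HOST=db
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=changeme
```

The same `db` service could of course be shared by other containers rather than being Nextcloud-only.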

You can typically configure the software in a docker container just as much as you could if you installed it on your host OS…

True, but how large do you estimate the intersection of "users using docker by default because it's convenient" and "users using docker and having the knowledge and putting the effort to fine-tune each and every container, optimizing/rebuilding/recomposing images as needed"?

I'm not saying it's not feasible, I'm saying that nextcloud's packaging can be quite tricky due to the breadth of its scope, and by the time you've given yourself fair chances for success, you've already thrown away most of the convenience docker brings.

Docker containers should be MORE stable, if anything.

and why would that be? More abstraction thrown in for the sake of sysadmin convenience doesn't magically make things more efficient…

Nothing to do with efficiency; more because the containers come with all dependencies at exactly the right version, tested together, in an environment configured by the container creator. It provides reproducibility. As long as you have the Docker daemon running fine on the host OS, you shouldn't have any issues running the container. (You'll still have to configure some things, of course.)

Wait so it's not just that my vps only has 1gb of ram?

You guys with more ram still get crashes?