themachine

@themachine@lemmy.world
0 Posts – 47 Comments
Joined 1 year ago

Yes and what exactly is a radical feminist to you

But the article explains that there is a technical reason.

Debian for all things.

Just look at the bit rate of what you are streaming, multiply it by 3, then add a little extra for overhead.
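As a worked example of that rule of thumb (the 10 Mbps bit rate and 20% overhead cushion are placeholder numbers, not from the original comment):

```shell
bitrate_mbps=10    # bit rate of what you're streaming (placeholder)
streams=3          # the "multiply it by 3" rule of thumb
overhead_pct=20    # "a little extra" for overhead (assumption)

# integer arithmetic: 10 * 3 * 120 / 100 = 36
needed=$(( bitrate_mbps * streams * (100 + overhead_pct) / 100 ))
echo "${needed} Mbps"
```

So a 10 Mbps stream would call for roughly 36 Mbps of upstream bandwidth.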

Are you asking about why private trackers are private, or are you asking about why a handful of people were mean to you who also happened to use a private tracker?

Assuming the implementation is done in such a way that I am not indirectly owned by the manufacturer of the BCI and am capable of maintaining its software and firmware myself...yes yes absolutely yes stick that shit in my head.

But if it is not open source and I'm expected to be tied to some corporate entity just to utilize it, no, absolutely not.

Care to cite your sources for that claim? I know they are far from anything that could be considered "good", but "worse than cigs" is news to me.

As others have said, cool concept, awful name.

Bad name aside, Windows-only client support is a big letdown and makes the application useless to me.

The primary reason a private tracker is private is to make it feasible to maintain a curated community. Many users are not good torrent citizens. Many users are not good netizens in the first place. More than a few will look to actively do harm. Keeping a mostly closed community allows the vetting of users, and those who end up breaking the rules are dealt with swiftly.

The extra barrier to entry also helps prevent bad actors from operating on the site. This is of course not a foolproof thing, but it is obviously much better than a public site.

Additionally, running a private tracker and site takes server resources that are not free. Limiting the total number of users is a way of maintaining uptime by staying within your operational limits.

I'm sure there are other benefits for private trackers but these are at least a few.

I am not going to explain why someone on the internet was mean to you. Given the tone of this post I wouldn't be surprised if it was deserved.

You're probably better off looking for hardware to meet your spec requirements and then looking into its Linux support.

Ah, I was wondering why I couldn't get it to detect my yubikey. I saw keepassxc-full in the repo but that also didn't seem to work. I'll have to revisit it.

Debian

Trust no one absolutely.

You tried what exactly earlier today?

It was less so "they aren't interested" and more so "we don't have time right now with all the bug fixes that need implementing".

Your title is about backups but your question seems mostly just about how to set up your storage for backups.

You can go about pooling disks in a few ways but you first need to define what level of protection from failure you want. Before going further though, how much space do you project that you will need for backups?

Trust no one. Not fully at least.

Just an FYI to you and anyone else who might read this but you don't even have to link a PSN account for cross play. I have no PSN account and cross play works just fine.

What exactly do you mean by "not mountable"?

Depends on which tracker. In general, though, it's invites, and you get invites by association with relevant groups. Sometimes open registration occurs (though rarely), and other times the tracker may do interviews.

It mostly comes down to time and patience.

If you want simple you'll have to manually decrypt each time it needs doing.

If you want it to be "automatic" then your best bet is something network based. A "simple" approach would be a script that SSHes somewhere, pulls the decryption key, and then decrypts the disks. There are plenty of flaws with this though: while a threat actor couldn't swipe a single encrypted disk and read it, they could just log in as root, grab your script, and pull the decryption key themselves.
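A minimal sketch of that script idea, assuming LUKS-encrypted data disks; "keyhost", the key path, and the device name are all placeholders:

```shell
# Pull the decryption key from a remote host and unlock the data disk.
# Run as root. /dev/sdb1 is a placeholder for the encrypted partition.
ssh keyhost cat /keys/data.key | cryptsetup open /dev/sdb1 data --key-file=-
mount /dev/mapper/data /mnt/data
```

Note the key never touches local disk here, but anyone with root on this box can still run the same commands, which is exactly the flaw described above.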

The optimal solution would be to also encrypt the root partition but now you need to do network based decryption at boot which adds further complexity. I've previously used Clevis and Tang to do this.

I personally don't encrypt my server root and only encrypt my data disks. I then ssh in on a reboot or power event and manually decrypt. It is the simplest and most secure option.

Replace existing online services you use with self hosted ones.

I'm not entirely sure what the actual question is. Can you rephrase what exactly you are trying to accomplish?

I backed this: https://www.crowdsupply.com/cool-tech-zone/tangara

It's not yet released and is still in the manufacturing process, but I think it's worth considering.

I've had one for a while now and overall I'm happy with it. The screen and camera aren't as good as some other devices', and it doesn't support all of the bands that US providers use, so service coverage may vary. I should also add that the touch sensitivity is a little off. I'm not sure if that's software or hardware to blame though.

I'm on a T-Mobile reseller and, excluding situations like being inside a data center or being outside of town camping or whatever, my service has been acceptable. It's also less of an issue for me as I'm almost always in WiFi range.

I don't think the phone is upgradable. It is repairable though. The fact that it has an easily removable battery is enough to justify the device for me as glued in dead batteries have historically been my biggest issue with device longevity.

Error message? Nextcloud logs?

Can't tell you what's happening with no more information about the problem than "it doesn't work".

Add a test folder, add some data to it, then delete the test folder and see if it deletes the data.

If the total data is 3 TB and you want disk-failure protection, I would take your two 6 TB disks and put them in a mirror. With the amount of data you have and the drive sizes at your disposal, that makes the most sense. This leaves you with 3 TB free for growth. If you wanted an additional backup, I would recommend storing it in a different location entirely or paying a cloud provider like Backblaze.

I would do this with ZFS, but you can also do this via LVM or just straight md-raid/mdadm. I'm not sure what your issues are with ZFS on Pop!_OS, but they should be resolvable, as Ubuntu supports ZFS fine to my knowledge.
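If you go the ZFS route, the mirror is only a couple of commands. A sketch, run as root; the pool name and the /dev/sdb and /dev/sdc device names are placeholders for your two 6 TB disks:

```shell
# Create a pool named "backup" with the two disks mirrored.
zpool create backup mirror /dev/sdb /dev/sdc
zfs create backup/data     # a dataset to hold the backup data
zpool status backup        # both disks should show ONLINE
```

Either disk can then fail without data loss, and `zpool status` will tell you when one does.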

An alternative you could consider is using mergerfs to logically pool individual filesystems on each of the disks and then use SnapRAID to provide some level of protection. You'll have to look into that further if it interests you, as I don't have too much info in my head related to that solution. It's not as safe as a mirror, but it's better than nothing.

Symlinks? Pretty sure that exists on windows.

I prefer restic for my backups. There's nothing inherently wrong with just making a copy if that is sufficient for you though. Restic will create small point-in-time snapshots as compared to just a file copy, so in the event that you perhaps made a mistake, accidentally deleted something from the "live" copy, and managed to propagate that to your backup, it is a nonissue as you could simply restore from a previous snapshot.

These snapshots can also be compressed and deduplicated making them extremely space efficient.
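For reference, the basic restic workflow is just a few commands; the repository and data paths here are placeholders:

```shell
restic -r /mnt/backup/repo init                # one-time: create the repository
restic -r /mnt/backup/repo backup /srv/data    # take a point-in-time snapshot
restic -r /mnt/backup/repo snapshots           # list available snapshots
restic -r /mnt/backup/repo restore latest --target /srv/restore   # recover
```

Each `backup` run only stores new or changed chunks, which is where the deduplication and space efficiency come from.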

I don't see how that's relevant.

6 more...

I do exactly this as well.

Symlinks likely wouldn’t work for a torrent, because that’s more like a shortcut; The symlink doesn’t actually point to the file, it just points to another filepath.

They are kinda like a shortcut, but they are resolved directly by the filesystem, and in the vast majority of cases they should work perfectly fine if done correctly. In OP's case I'd probably leave the original file intact and create the link at the new desired destination.

You can’t have a hardlink for your C: drive on your D: drive

That's why I didn't recommend hardlinks. But I misread OP's post, and I see the data will all live on the same drive, so I revise my original suggestion and also recommend hardlinks.

But a torrent client likely won’t be able to handle the “oh actually you need to go visit location B” instructions, and will just crash/freeze/refuse to seed.

You're just pulling that out of your ass.

*all of this is largely in the context of Linux but should translate to Windows
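A quick hardlink demo on Linux; the "downloads" and "library" directory names stand in for a torrent client's folder and a media library on the same filesystem (hardlinks can't cross drives):

```shell
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p downloads library
echo "movie data" > downloads/movie.mkv
ln downloads/movie.mkv library/movie.mkv   # hardlink: same inode, no extra space
rm downloads/movie.mkv                     # removing one name...
cat library/movie.mkv                      # ...leaves the data reachable via the other
```

A symlink would be `ln -s` instead, and it breaks if the target path is removed, which is exactly the difference being argued about above.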

I just recently learned this.

For OsmAnd, go to search, then the categories tab, and then hit "Online Search".

Voila, address lookup.

So you believe that the performance improvement and power saving is not worth creating a new standard?

Why not just run a reverse proxy container on the server hosting the rest?

Best practice comes down to what you do or do not want the VPN clients to access. That is mostly a matter of routing and firewall rules.

So, what should your users have access to?

Also, what is the VPN?

In the scope of wireguard it'll just be a matter of you building appropriate firewall rules.

Since you want their internet traffic to go through you, I assume you're effectively pushing a 0.0.0.0/0 route to your clients. You then need to add firewall rules on your server to block traffic to its local subnet and, in the future, allow traffic to only your Jellyfin server.
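As a sketch with iptables; the interface names, subnets, and Jellyfin host address are placeholders (8096 is Jellyfin's default HTTP port), and rule order matters:

```shell
# Allow wg0 clients to reach only the Jellyfin host on the LAN...
iptables -A FORWARD -i wg0 -d 192.168.1.10 -p tcp --dport 8096 -j ACCEPT
# ...block everything else on the local subnet...
iptables -A FORWARD -i wg0 -d 192.168.1.0/24 -j DROP
# ...and let the rest (internet traffic) through, NATed out the WAN interface.
iptables -A FORWARD -i wg0 -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
```

The same policy can be expressed in nftables if that's what the server uses.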

This is also pretty simple and nothing wrong with that setup.

Have you ever had an issue that you had to get support for? Whether it's asking fairphone for help or just searching online for answers, did you have any trouble?

Hmmm. I don't think so. I had some weird issues with audio on phone calls at one point but I think that was not due to the phone and more so due to LineageOS, a third party OS.

I worry that the disassemble-able design could make the phone less drop resistant, have you experienced that?

Well, I don't drop my phone, but I also don't feel like its construction makes it overall weaker. I do keep it in a case and with a screen protector on, though.

I mean, I'm not sitting here defending soldered-on RAM, but your unnecessary aggression and sarcasm in your previous responses overshadow the fact that while soldered-on RAM sucks for the upgrade and repair market, the underlying tech has very tangible improvements, and now we can keep those improvements while regaining upgradability and repairability.

I agree, soldered RAM is bad. But I disagree that LPDDR RAM is fundamentally bad, and this improvement, allowing it to be modular while maintaining its advantages, is a very good thing.

As far as your complaints about battery life on your ThinkPad go, there is much more to battery life than the power consumption of the memory, but naturally every part plays a role, and small improvements in multiple places result in a larger net improvement. I'm assuming you're running Linux, which in my experience has always suffered from less-than-optimal power usage. I'm far from an expert in that particular area, but it's always been my understanding that it is largely caused by insufficient firmware support.

Looking at this article in a vacuum, I only see good things. A major flaw with LPDDR has been addressed, and I will be able to expect these improvements in future systems.