pootriarch

@pootriarch@poptalk.scrubbles.tech
3 Posts – 24 Comments
Joined 1 year ago

a beautiful robot, dancing alone · showgirls über alles: kylie, angèle · masto · last.fm · listenbrainz · lovekylie

long ago i shifted to vscodium, a packaged version of only the open-source base of vscode that provides most but not all of the available extensions. for two reasons: so that i didn't leak telemetry to m$, and so that i wouldn't get used to features that aren't open source. it's available in a lot of package managers, mac/windows as well as linux

i did try that but the never-dark mode blinded me. i understand the reasoning, but absolute anonymity isn't my own threat model; i'd like to be able to use themes and resize the window


you probably already found this, but for others who might be curious:

https://molly.im/

https://github.com/mollyim/mollyim-android

thanks, i'll look again. it's not that i love the idea of being fingerprinted; i just think that five mylar bags, four tin hats and a partridge in a pear tree won't save me from that. i need my password manager, and once that's in, enforcing a generic screen is silly - cow's out of the barn. but not having the arms race against pocket and telemetry would be a big bonus.

neo store refuses to run if you don't grant it the right to send notifications and bypass battery optimizations. if an app demands a permission and doesn't have a plausible explanation why it needs it, i don't keep it :/


It exists, it's called a robots.txt file that the developers can put into place, and then bots like the webarchive crawler will ignore the content.

the internet archive doesn't respect robots.txt:

Over time we have observed that the robots.txt files that are geared toward search engine crawlers do not necessarily serve our archival purposes.

the only way to stay out of the internet archive is to follow the process they created and hope they agree to remove you. or firewall them.

https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/
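for reference, this is the conventional robots.txt exclusion (ia_archiver is the user-agent the wayback machine historically honored) — which, per their post above, web archives no longer treat as binding:

```
# conventional wayback-machine exclusion
# per the archive.org post linked above, no longer reliably honored
User-agent: ia_archiver
Disallow: /
```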

i'm shopping for mp3 players for precisely this reason - a friend has an ipod touch that abruptly stopped scrobbling. the last.fm app is stuck in a loop sucking battery. and she needs bluetooth anyway. she has always kept music and phone separate but now we have to ask the five whys on that before getting her a new unfamiliar gadget.

against which kitchen pots have proven surprisingly useful elsewhere. against all odds

again not foss so won't dwell at length — but i use fund manager from beiley software. commercial, but works double-entry and handles more investment complexity than a human could ever need. windows app, i run it under wine on linux and crossover on mac. (i don't own a windows box — that's how irreplaceable it was for me.)

just like heaven could have its own thread, practically its own community

in the settings, if you change the notification method from websocket to unifiedpush, the UP settings appear, including a server address (which is how they intend it to be used) or some air-gapped mode that i can't find documented

an interesting oddity: on my non-rooted xperia, signal thinks that i don't have play services and so it falls back to… polling. every five minutes. killing my battery and my logs.

i had to put signal into the restricted battery group, which means no notifications. i anxiously await the new molly, as i already have a unified push environment. it looks like the migration will be a bit delicate.

imo magic earth is a navigation app, full stop. it does that amazingly well, including live traffic, but i wouldn't use it for anything else. organic maps is a better general-purpose map but isn't a patch on magic earth for nav.

looks great! the catch for me is that my current host doesn't have docker support. your dependencies don't look crazy so in theory i could burst it and install directly to the host environment, but at that point i'm giving myself grocy-level headaches.

reading about docker-capable hosts, i was surprised to see them starting at 1GB RAM - i couldn't run pac-man in that. what would be a reasonable expectation for kitchenowl?

Prerequisites

  • Internet-facing web server with reverse proxy and domain name (preferably SSL of course)
  • Server behind the reverse proxy with Rust environment

Installation

  • Don't bother downloading the source code to your server; installing it that way gives you a big debug executable
  • Instead just cargo install mollysocket
  • Move the mollysocket executable if desired
  • Run mollysocket once so that it will emit the default config

Configuration

  • Fish the config file out of .config/mollysocket/default-config.toml and copy it somewhere.

config.toml

  • In the new file, replace the allowed_endpoints line with allowed_endpoints = ['*']. The default 0.0.0.0 config appears to be a bug; this setting controls access to endpoints within the app, not IPs from outside. Leaving the original value causes mollysocket to reject everything.
  • Put a proper path in the db = './mollysocket.db' line rather than just having it land wherever you're sitting.
  • Delete the mollysocket.db that was created on first run (even if it's already where you're intending to put it). This is just to make sure the web server creates it and has the correct permissions.
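putting those two changes together, the edited config might look something like this (paths are illustrative — point them wherever you actually want things to live):

```toml
# your copied config.toml -- point MOLLY_CONF at wherever you put it
# '*' allows all app endpoints; this controls endpoints within the app,
# not source IPs, so the default 0.0.0.0 just rejects everything
allowed_endpoints = ['*']

# absolute path, so the db doesn't land in whatever directory you launch from
db = '/var/lib/mollysocket/mollysocket.db'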

Run script

  • The environment variable ROCKET_PORT must be set or the server will sit and do nothing. It's best to create all of the environment variables mentioned in the README, whether that is in a user profile script or in a shell script that wraps startup. You can change any of these values, but they must exist.
  • export ROCKET_PORT=8020
    export RUST_LOG=info
    export MOLLY_CONF=/path/to/your/config.toml
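if the host runs systemd, the same environment variables can live in a unit file instead of a profile or wrapper script. a sketch — the user, paths, and launch command are assumptions to adapt:

```ini
# /etc/systemd/system/mollysocket.service -- illustrative
[Unit]
Description=MollySocket UnifiedPush server for Molly
After=network.target

[Service]
User=mollysocket
Environment=ROCKET_PORT=8020
Environment=RUST_LOG=info
Environment=MOLLY_CONF=/path/to/your/config.toml
# adjust if your mollysocket version expects a subcommand
ExecStart=/usr/local/bin/mollysocket
Restart=on-failure

[Install]
WantedBy=multi-user.target
```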
    

Proxy server

  • You'll need to proxy everything from / to your mollysocket server and ROCKET_PORT.
  • Exclude anything that you may need served from your web server, such as .well-known.
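as an nginx sketch, assuming nginx as the reverse proxy and the example port from above — server names and paths are placeholders:

```nginx
# illustrative nginx server block
server {
    listen 443 ssl;
    server_name mollysocket.example.org;

    # keep serving things like acme challenges from the web server itself
    location /.well-known/ {
        root /var/www/html;
    }

    # everything else goes to mollysocket on ROCKET_PORT
    location / {
        proxy_pass http://127.0.0.1:8020;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```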

Things to know

to a techie, i'd say: it's open source, and if they ever overpushed politics, the community would fork away and leave them as just another fork.

to a non-techie, i'd say, everyone's an asshole a different way, but they don't own the whole place like spez and musk do.

and i wouldn't argue. let them walk away haters. this platform isn't ready for everyone to come right now anyway.

this is true. having said that - i follow a peertube-based french outfit called blast (can't speak french, just look at the pictures). if i go to a different site (peertube.stream, liberta.vip) and look at a video, the streams are coming off video.blast-info.fr.

there's no question video is a huge resource suck, and that nobody would want to host a lot of other people's videos. i just wonder whether, if the model is federated indexes but owner-hosted video, there's a use case that can work at scale.

appimages just got less easy…

i don't know which update did it - i think it must have been os-level (i run pop_os, derived from ubuntu) - but appimages silently stopped working. double-click, nothing. finally, out of desperation, i looked in the log, which said 'appimages require fuse'.

more accurately, appimages require fuse 2, and the os had just upgraded to fuse 3. the fix is to install libfuse2 specifically - don't touch any other fuse-related package, or things can start breaking:

sudo apt install libfuse2

originally seen on an omgubuntu post

i made the same migration from markor (files in a folder) to logseq. there's a lot to be gained - always-preview alone is a game changer - but on mobile the visibility of the keyboard can be fiddly. once in a while you'll feel like you're in vi, it has such a mind of its own. but i'm not planning to go back

well i feel stupid now for not doing the obvious. but…

Blocked Page

Your organization has blocked access to this page or website.

on the PPA box, this is what it showed me (meanwhile it was attempting to connect to incoming.telemetry.mozilla.org). another symptom of firefox appearing to respect enterprise policies while actually ignoring them. (as i mentioned, on this box all the settings look locked down as they should be, yet it's still attempting to send telemetry.)

so per wikipedia and confirmed at MDN, firefox is the only major browser line not to consider certificate transparency at all. and yet it's the only one that has given me occasional maddening SSL errors that have blocked site access (not always little sites, it's happened with amazon).

i don't understand how firefox can be simultaneously the least picky about certificates and the most likely to spuriously decide they're invalid.

part of humans learning to drive safely is knowing that flouting traffic laws increases your chance of being stopped, fined, or if you're not the right demographic, worse things. we calibrate our behavior to maximize speed and minimize cops, and to avoid being at-fault in an accident, which is a major hit to insurance rates.

autonomous vehicles can't be cited for moving violations. they're learning to maximize speed without the governor of traffic laws. in the absence of speed and citation data, it's hard to measure how safe they are. there is no systemic incentive for them to care about safety, except for bad press.

thanks to this post, i'm trying out searxng and then kagi, neither of which i knew about. hopefully there's a searxng instance configured roughly the way i'd want. i'm not philosophically opposed to paying, but search is a delicate thing to be personally identifiable - and i don't care what your privacy policy is, if you're taking my money, you can connect me with my clicks

i haven't tried the docker route - it seems fairly new. it also doesn't seem like it would fix the issues i ran into. containerization is great for insulating the app from external dependency hell and environmental variation. but the problems i've had involve its own code and logic, and corruption of a sqlite database within its own filesystem; wrapping issues like that in a docker container only makes them harder to solve