bellsDoSing

@bellsDoSing@lemm.ee
0 Posts – 55 Comments
Joined 1 year ago

I'm in the same boat as you. But considering there's this thing called the "ad industry", there's bound to be a considerable portion of people who are influenced enough by ads, even just at a subconscious level, that investing money into ads is worthwhile both for businesses selling products and for businesses offering ad platforms.

Depends on the specific plugin. I've been doing music production on Linux for several years now, and back then things looked a lot worse than they do today. The most popular bridge solution for Windows plugins on Linux is yabridge atm. Its README is well worth a closer read, because it answers many questions on how to get even modern plugins (e.g. JUCE-based ones) to display correctly.
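
To make that concrete, a minimal yabridge setup sketch. The plugin directory below is an assumption (a default Wine prefix); adjust it to wherever your Windows plugins actually live, and treat the exact flow as per the README, not as gospel:

```shell
# Guarded sketch: only runs the real commands if yabridgectl is installed.
if command -v yabridgectl >/dev/null 2>&1; then
    # register a directory of Windows VST3 plugins (path is a placeholder)
    yabridgectl add "$HOME/.wine/drive_c/Program Files/Common Files/VST3"
    yabridgectl sync    # (re)create the Linux-native bridge files for each plugin
    yabridgectl status  # show which plugins were picked up
else
    echo "yabridgectl not installed; see the yabridge README"
fi
```

After a `sync`, the bridged plugins show up to Linux DAWs like native ones.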

The past doesn't necessarily dictate the future. If the people in charge of SUSE's direction going forward think differently than the ones back then (in regard to your comment), then the outcome can be different / better for the Linux community, can't it?


And it's what will find you the most answers online in case you have a git-related question.

On top of that, 20 kHz is quite a theoretical upper limit.

Most people, be it due to aging (which affects all of us) or due to behaviour (some way more than others), can't hear that far up anyway. Most people would be surprised how high up even e.g. 17 kHz is. It sounds a lot closer to very high-pitched "hissing" or "shimmer", not something that's considered "tonal".

So yeah, saying "oh no, let me have my precious 30 kHz" really is questionable.

At least when it comes to listening to finished music files. The validity of higher sampling frequencies during various stages of the audio production process is a different, way less questionable topic.

Coincidentally, I happen to have been reading up on SEO in more depth this week. Specifically the official SEO docs by Google:

https://developers.google.com/search/docs/fundamentals/seo-starter-guide

To be clear, SEO isn't about tricking search engines per se. First and foremost it's about optimizing a given website so that the crawling and indexing of the website's content is working well.

It's just that various websites have tried various "tricks" over time to mislead the crawling, indexing and ultimately the search engine ranking, just so their website comes up higher and more often than it should based on its content's quality and relevancy.

Tricks like:

  • keyword stuffing
  • hidden content just visible to crawlers
  • ...

Those docs linked above (that link is just one part of a much larger set of docs) even mention many of those "tricks" and explicitly advise against them, as they will cause websites to be penalized in their ranking.

Well, at least that's what the docs say. In the end it's an "arms race" between search engines and websites employing trickery.

Have had a ZOWIE EC2 for quite a while now:

  • gaming mouse, 5 buttons
  • USB compliant
  • no special vendor drivers needed to use all mouse features (has buttons on bottom side for settings)

Works well on every OS.

Just looked it up a bit: https://microsoft.github.io/monaco-editor/

AFAIU, monaco is just about the editor part. So if an electron application doesn't need an editor, this won't really help to improve performance.

Having gone through learning and developing with electron myself, this (and the referenced links) was a very helpful resource: https://www.electronjs.org/docs/latest/tutorial/performance

In essence: "measure, measure, measure".

Then optimize what actually needs optimizing. There's no easy, generic answer on how to get a given electron app to "appear performant". I say "appear", because even vscode leverages various strategies to appear more performant than it might actually be in certain scenarios. I'm not saying this to bash vscode, but because techniques like "lazy loading" are simply a tool in the toolbox called "performance tuning".

BTW: Not even using C++ will guarantee a performant application in the end, if the application topic itself is complex enough (e.g. video editors, DAWs, etc.) and one doesn't pay attention to performance during development.

All it takes is letting a bunch of somewhat CPU-intensive procedures pile up in an application, and at some point it will feel sluggish in certain scenarios. The only way out of that is to measure where the actual bottlenecks are, think about how one could get away with doing less (or deferring work to more of an "idle" time while a bunch of other things are going on), and then make the respective changes to the codebase.

Git for projects

I assume the original comment meant code-based projects, for which git, with the repo pushed to a remote, is a very sane choice.


Yeah, especially when considering that placebo and (in this case) nocebo effects are a real thing.

What do people think would happen when being told they will very likely be diagnosed with an incurable disease 5 years from now? Do they think their levels of stress, anxiety, negative thinking etc. will stay as if they'd never heard that information? No, likely not. Their health will therefore potentially be affected negatively just by knowing it.

But the important part here is the "incurable"! If there's any chance one can prolong good health by acting in a preventive, health-supporting way a couple of years sooner, then yes, it likely would be better to know earlier and change something, even if the disease is likely to affect one at some point.

And what makes it even trickier is that nobody really knows what future medical advances will be like. What's called inevitable and incurable now, might, with early treatment, actually no longer be in the not too distant future.

That's one of the reasons why the more modern fd is a nice alternative: it accepts command-line args as you'd expect.
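
A quick side-by-side of what I mean by "as you'd expect", using a throwaway directory (filenames are just placeholders for illustration):

```shell
# set up a tiny fixture to search in
tmp=$(mktemp -d)
touch "$tmp/notes.md" "$tmp/notes.txt"

# find: the path comes first, the pattern hides behind an explicit -name flag
find "$tmp" -name '*.md'

# fd: the pattern is simply the first positional argument
# (guarded, since fd may be missing or packaged as fdfind on some distros)
if command -v fd >/dev/null 2>&1; then
    fd '\.md$' "$tmp"
fi

rm -rf "$tmp"
```

Both print the path to `notes.md`; the fd invocation is just the more intuitive one to type from memory.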

Honestly, if all you've ever experienced in regards to terminals is Windows CMD, then you really haven't seen much. I mean that positively. If anything, CMD will give you a far worse impression of what using a Linux / Unix terminal can be like (speaking as someone who has spent what feels like years in terminals, the least amount of it in Windows CMD).

I suggest simply playing around with a Linux terminal (e.g. install VirtualBox, then use it to install e.g. Ubuntu, then follow some simple random "Linux terminal beginner tutorial" you can find online).

Oh, it's still going!


AFAIK, if you want disk encryption on Arch, you gotta set it up yourself (i.e. follow the wiki).

And last time I installed manjaro (couple years ago), the installer would let you decide whether you want disk encryption or not. So nobody is being forced to use it.

Then again, if you are tired of it, there likely is a way to effectively disable it for your current install. But most likely that will be quite a bit more involved than just unchecking it during install.


Not the same as "on demand zooming", which lets one stick with a high, native resolution but zoom in when required (e.g. websites with small text that can't be zoomed via the browser's font size increase; e.g. referencing some UI stuff during UI design without having to take a screenshot and paste + zoom it in e.g. GIMP).


Somewhat recently I caused a failed kernel update by accident:

I ran a system update in a tmux session (a local session on my desktop). The problem was that tmux itself also got updated, which crashed the tmux session and, as a result, crashed the kernel update. I only realized it upon the following reboot (which no longer worked).

Your described solution re "live ISO, chroot, run system update once more, reboot" was also what got me out of that situation. So certainly something worth learning for "general troubleshooting" purposes re system updates.
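
For future reference, a sketch of that recovery flow on an Arch-based system. The device name (/dev/sda2, a single root partition) is an assumption; adapt everything to your actual partition layout before running it from a live ISO:

```shell
# Inert by default: flip RUN_RECOVERY=yes only when actually on a live ISO.
if [ "${RUN_RECOVERY:-no}" = "yes" ]; then
    mount /dev/sda2 /mnt          # mount the broken system's root partition
    arch-chroot /mnt pacman -Syu  # re-run the interrupted update inside it
    umount /mnt
    reboot
else
    echo "dry run: set RUN_RECOVERY=yes from a live ISO to execute"
fi
```

With separate /boot or /home partitions, those need mounting under /mnt too before the chroot.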

I've been using Manjaro (XFCE edition) as my daily driver, both on a laptop and a desktop system for more than 6 years now. I've tried many others beforehand: Ubuntu and its variations, Arch, Fedora, Tumbleweed, ...

But Manjaro was what made me stop hopping around. While it's true that it has some pitfalls (e.g. cert issues, AUR incompatibility at times), to this day it's working well enough for me that I don't feel like switching away.

I'm not just browsing web on it either. Software engineering, music production, image and video processing, etc.

Then again, I don't consider myself a beginner at this point and can troubleshoot a fair amount of issues now that I simply couldn't when I started using Linux more than a decade ago.

I also try to:

  • not overdo the amount of AUR stuff I use
  • read the official forum announcement BEFORE running a system update

I also always appreciated the fact that I could get away with not doing a system update for like six weeks and then do a big one (as mentioned, in combination with reading their update announcement). That's always something that didn't quite work for me on Arch in the past (then again, I still was a beginner back then, so most "reinstall to solve this problem" situations back then were on me).

What if Manjaro really got bad enough that I'd want to switch? I guess EndeavourOS would be an option, because it's very close to Arch while seemingly offering a graphical installer that will hopefully set everything up properly on a laptop. Then again, I haven't installed Arch in quite a while now. Maybe the install experience has gotten much nicer.

But if not for using submodules, how can one share code between (mono-)repos, which rely on the same common "module" / library / etc.? Is it a matter of "not letting submodules usage get out of hand", sticking to an "upper limit of submodules", or are submodules to be avoided entirely for monorepos of a certain scale and there's a better option?


Nah, one is enough. ^^ Curiosity got the better of me thinking about how squished the UI might end up looking.


Alright, looks like 40% filled up on my screen atm.


To add, edge functions (powered by deno) are one of the bigger pain points of supabase. At least that's my own practical experience and the experience of quite a few others on their github (discussions and issues).

In my current project, I started off optimistically ("Should be doable, they say you feel right at home coming from nodejs!"), tried rewriting some existing nodejs code and used edge functions just like your average nodejs-powered serverless functions.

But in the end, things just didn't work out:

  • deno's crypto module just wasn't up to scratch yet re nodejs compatibility (for my rather humble needs)
  • supabase uses --no-npm flag re its use of "deno deploy runtime", which means node: specifiers for imports aren't supported
  • the fact that, unlike serverless functions, which update their runtime only once you yourself trigger a new deployment (e.g. nodejs on vercel), the "deno deploy runtime" is continuously being updated to the latest version, which to me still feels pretty strange for production use, considering how serverless functions handle runtime updates.

In the end I changed my architecture yet again, moved most of the code to an expressjs backend and only use edge functions as a kind of "tender" proxy layer with minimal dependencies (mostly just deno and some esm.sh imports; e.g. supabase-js).

Don't get me wrong, supabase overall is a great thing and they do many things well! I'm still using them going forward. But edge functions just have the potential for being such a pain point in a project and many have already wished for also having the option for "classic" serverless functions.

Have been using a Zowie FK2 for a couple of years now and it's really nice. No drivers needed due to being USB class compliant. Hardware toggle for DPI. Good build quality. If it broke tomorrow, I'd buy it again if available.

'Tasks' (open source Android app) can do this. Likely you already knew (others likely don't). But I agree, a surprisingly uncommon feature.

20 mph (32 km/h) on a regular bike is doable, but yeah, usually that involves a very "flat" road or even a road that has a slight decline. And as you've said, maintaining it (e.g. for more than 10 seconds) is a whole different story.

Furthermore, it also requires a certain fitness level and "bodily involvement". The thing that still catches me off guard at times is how relaxed some people on ebikes look while going that fast. Whatever kind of judgement I could make in the past on how fast someone is approaching, based on how much they "visually exert themselves" (e.g. hunching forward or even standing up), has kind of become meaningless with ebikes.

I additionally mapped that latter one to F2, because being able to repeatedly copy from VIM and paste into another application without having to move your hand between mouse and keyboard is nice.

Of course, that's VIM. If you meant "vim mode" in shell, then that's a different story.

Yeah, basically anything that rewrites already pushed history and is then (force-)pushed is bound to create problems (unless it's a solo dev only ever coding on a single device, who uses the remote repo as a mere backup solution).
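
A tiny throwaway demo of why that is: amending a commit produces a new hash, so a branch that was already pushed would now require a force-push, and anyone who pulled the old commit has a diverged history. (Names and messages below are placeholders.)

```shell
# set up a scratch repo
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo

# first commit, remember its hash
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial"
before=$(git rev-parse HEAD)

# "fix" the commit message: this rewrites the commit
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --amend --allow-empty -m "initial, reworded"
after=$(git rev-parse HEAD)

echo "before: $before"
echo "after:  $after"   # a different hash, despite being "the same" commit
```

Since the old hash no longer exists on the branch, a plain `git push` gets rejected and only `--force` (or `--force-with-lease`) would go through.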

Yeah, such a simple, but still killer feature. Really sad that JSON doesn't support them.


Great read, certainly had more relatable things in there than I'd expected.

I hope we're all talking about portrait orientation. Oh boy, filling it up in landscape mode seems a daunting task. °!°


I see, somehow completely forgot that apps might be different. In the browser version in landscape (I just noticed) there's also the right sidebar, which reserves some space. So it wouldn't even have to go all the way.


Nice, bit over half way point here.


I do wonder if there's a hard limit at some point regarding "nested replies"...


Yeah, AFAIR, the issue of "Windows messing up GRUB" could happen when both were installed on the same disk (e.g. on a laptop with one disk). Something about Windows overwriting the MBR sector. At least that was a problem back before UEFI.

I too have been dual booting Windows 10 and Linux for many years now, each having their own physical disk, Linux one always being first in boot order. Not once did a Windows 10 update mess up grub for me with this setup.

I went through setting up netdata for a staging (on its way to production) server not too long ago.

The netdata docs were quite clear on the fact that the default configuration is a "showcase configuration", not a "production ready configuration"!

It's really meant to show off all features to new users, who can then pick what they actually want. The great thing about disabling unimportant things is that one gets a lot more "history" for the same amount of storage, because there are simply fewer data points to track. Similar with adjusting the rate at which it takes data points: for instance, going down from the default 1s interval to 2s basically halves the CPU requirement, even more so if one also disables the machine learning stuff.
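
As a rough sketch of the two tweaks just mentioned, in netdata.conf terms (section and option names as I remember them from the docs; verify against your netdata version before relying on this):

```ini
[global]
    update every = 2    # sample every 2s instead of the default 1s

[ml]
    enabled = no        # disable the machine learning anomaly detection
```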

The one thing I have to admit though is that "optimizing netdata configs" really isn't that quickly done. There's just a lot of stuff it provides, lots of docs reading to be done until one roughly gets a feel for configuring it (i.e. knowing what all could be disabled and how much of a difference it actually makes). Of course, there's always a potential need for optimizations later on when one sees the actual server load in prod.

I started using git-secret 2 years ago. It's nice for making secrets part of the repo while not being readable by anyone who isn't explicitly allowed to do so (via GPG).
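
The typical flow looks roughly like this (guarded sketch; the e-mail address and filename are placeholders, and it assumes a GPG key for that identity already exists):

```shell
# Only attempt the real commands if git-secret is actually installed.
if command -v git-secret >/dev/null 2>&1; then
    git secret init                    # create the .gitsecret/ metadata dir
    git secret tell dev@example.com    # allow this GPG identity to decrypt
    git secret add .env                # mark .env as a secret (it must be gitignored)
    git secret hide                    # write the encrypted .env.secret for committing
    # later, on any allowed machine: git secret reveal
else
    echo "git-secret not installed"
fi
```

Only the encrypted `*.secret` files end up in the repo; the plaintext versions stay local.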

Regarding tauri: One and a half years ago I looked into it as a potential alternative to using electron.

Back then I had to decide against it for my use case: when the goal is a cross-platform app, one has to make sure that whatever "webview version" is used on each target OS supports the features one's own app codebase needs. Back then I needed an "offscreen canvas" feature that chromium supported (hence electron), but which webkit2gtk (used on Linux) didn't at the time.

https://tauri.app/v1/references/webview-versions/

So it's not always easy to give a clear recommendation on using tauri over electron. One really has to get somewhat clear on what kind of "webview requirements" the respective app will have.

But I do hope this will (or maybe already is) less of an issue in upcoming years (things are moving fast after all).

So AFAIU, if a company had:

  • frontend
  • backend
  • desktop apps
  • mobile apps

... and all those apps would share some smaller, self developed libraries / components with the frontend and/or backend, then the "no submodules, but one big monorepo" approach would be to just put all those apps into that monorepo as well and simply reference whatever shared code there might be via relative paths, effectively tracking "latest", or maybe some distinct "stable version folders" (not sure if that's a thing).

Anyway, I certainly never thought to go that far, because having an app that's "mostly independent" from a codebase perspective be in its own repo seemed beneficial. But yeah, it seems to me this is a matter of scale, and at some point the cost of not having everything in a monorepo becomes too great.

Thanks!


I see. 9th rainbow, here we go.


Ha, for sure I missed the other comment...


Oh boy... can't promise you that I will last that long. I know it sounds pathetic, but is replying to one's own comment an option (just for stress testing)?