Oliver Lowe

@Oliver Lowe@apubtest2.srcbeat.com
0 Posts – 48 Comments
Joined 4 months ago

Closer look at human story: Ghana: A Week in a Toxic Waste Dump

One of the problems with using older devices for a long time, even if they are repaired, is that the common ways people use their computers (I'm including smartphones, tablets, laptops etc. here) get slower over time. See How web bloat impacts users with slow connections by danluu@mastodon.social

3 more...

As someone who never did much web development, I was... surprised... at the amount of tooling that existed to paper over this issue. The headaches which stood out for me were JavaScript bundling (then you need to choose which tool to use - webpack, but that's slow, so you switch to esbuild) and minified code (but that's hard to debug, so now you need source maps to re-reverse the situation).

Of course the same kind of work needs to be done when developing programs in other languages. But something about developing in JS felt so noisy. Imagine if to compile Java or Rust you needed to first choose and configure your own compiler, each presenting their own websites with fancy logos adorning persuasive marketing copy.

4 more...

I'm sure Microsoft has some good devs that are a net benefit to the open source projects they use, but this is not one of them.

Found the guy who created the FFmpeg ticket on LinkedIn. Job title: "Principal software engineer at Microsoft", saying they are "A detailed, analytical Software Engineer with Eighteen years of experience". 18 years?! Fuck me dead...

"A failure to plan on your part does not constitute an emergency on my part."

Wow now that is a quote I'm going to steal. Wondering if "A failure to understand on your part does not constitute an emergency on my part." has the same punch or is as relevant... anyway, thanks for sharing!

I'm developing some software to connect ActivityPub and email. It's been fun and I've learned a lot!

I'll upload a video in a bit!

1 more...

I'll upload a video in a bit!

As promised: https://www.olowe.co/tmp/apsubmit.mp4

Well... everyone back to memcached?

1 more...

But, since this particular set of data is so well-defined, and unlikely to change, roll your own is maybe not crazy.

I think that's the trick here. A relational database lets you do a whole bunch of complex operations on all sorts of data. That flexibility doesn't come for free - neither financially nor performance-wise! Given:

  • engineering chops
  • a firm idea of the type of data
  • a firm idea of the possible operations you may want to do with that data

then there's a whole range of different approaches to take. The "just use PostgreSQL" guideline makes sense for most CRUD web systems out there. And there are architecture astronauts who will push stuff because they can, not because they should.

Every now and then it's nice to think about what exactly is needed and be able to build that. That's engineering after all!

It attracts passionate and clever people, but as a result comes with a (rightful) reputation of being hard/expensive to hire for.

Worked at a Scala shop for a while. It was interesting as an outsider to see exactly that play out (I'm a diehard Unix hacker type, love Go etc.). There were some brilliant minds who really seemed to "get" the Scala thing. Then there were others who were more run-of-the-mill Java developers. Scala and the JVM make all that, and everything in between, possible. With so many Java projects around, the Java devs would come and go depending on team/company factors like job cushiness, salary, or number of days in the office. But the more Scala-leaning people hung around. They made a huge impact on how projects were run.

The bosses would often talk with me about how hard it was to find those people. From a business perspective, they said it was absolutely worth the effort to find the Scala people despite the operational overhead of the revolving door of Java devs.

The last two points, "Ask feedback from one person", and "Sleep on it" I think are great. Ironically the article ignores the rest of its own tips.

  1. Less is more

Circular logic. Q: "How do I write more efficiently?" A: "With fewer words." This "tip" could be omitted entirely.

  1. Start with the solution or the ask. [...] In ideal cases, the main message you want to convey is already in the title.

Better title: "8 Writing Efficiency Tips for Software Professionals", or maybe "8 Tips for Software Professionals to Communicate More Efficiently"?

  1. Show the facts, with examples

No text extracts are provided in the article - for example, a rewritten paragraph, or a comparison of some summaries.

  1. Always quantify

Always! Except for this article which does not suggest a way to measure the efficiency of text e.g. number of words to convey the same message.

  1. Include links and references

None provided.

  1. Explain why it matters

Why would I want to write more efficiently? What real-world problems does efficient writing solve? Maybe I'm a software engineer new to the field and I don't know how pressed for time some managers are, or how people are drowning in verbose corporate junk words?

Maybe this article was LLM generated, like the cover graphic :(

Sorry to be a pain; I believe the correct term is "World Wide Wiki Wiki Wild Wild Web". But we usually just say "WWWWWWW" which is super short and easier to type.

Over the past approx. 100 days, dw_innovation@mastodon.social has made 2 or 3 public replies. The whole point of using any of these social networks over RSS/Atom feeds and plain old websites is that they're social, not just a place you upload text.

As a freedom loving hippie, I'd rather see broadcasters posting to the fediverse instead of whatever awful mish mash of Instagram, Facebook, Twitter et al. it is right now! That would be fantastic!

But as a technical purist, the way DW is using their account right now, they're arguably no worse off getting links/content from their RSS feeds available to the fediverse somehow (e.g. RSS Parrot). Sometimes I feel like we've had walled gardens for so long that we've forgotten about interoperability. Lots of platform thinking! Broadcasters don't need to be on the fediverse, just a way for their stuff to be shared to the fediverse.

I'm excited to see things changing in ways that make thinking like this even possible!

I found the documentation extremely lacking last time I looked at Swift 2 or 3 years ago. Any changes there? Or perhaps there's somewhere outside of Apple's own docs I should have been looking?

1 more...

and full of old bugs and legacy code.

The feeling of reading through those crazy JVM stack traces with classes named "hudson" from the Jenkins prototype... I shudder! Well done for pushing through it all!

Lemmy's maintainers seem overworked. As is the case with so much of software dev (open source or otherwise!), non-programmers are unaware of or underestimate the maintenance burden. From the outside, it looks like it's just about "adding a feature". But in reality, it's less about "adding" and more about "growing". Feature requests generally need to be evaluated with this in mind: whether future development is sustainable with some new feature(s).

I see opportunities here for some software dealing with either ActivityPub directly or with Lemmy's HTTP API.

Anyone used lemmy-modder? Thoughts?

$100 says his response boils down to "just don't write unsafe code"

Edit: It did. He also said C++ is working to implement optional safety features, but tl;dr:

Of the billions of lines of C++, few completely follow modern guidelines

Pretty sure this is a No true Scotsman moment. (I've always wanted to bring this fallacy up but I never knew when lol)

Pl4tyh4x0r

1 more...


I often like reading his older pieces. I forget how long he's been at it for. Here's one making fun of Apple almost 10 years ago: https://www.theguardian.com/technology/2015/feb/13/if-dishwashers-were-iphones

Spreading awareness of using git how it's actually designed to be used is cool to see.

I'm not clear on how I'd send patches to a repository hosted with ayllu. For example with the repo ayllu, they list codeberg (to which you can't send patches) and sourcehut (which uses lists.sr.ht). So... is ayllu really "email-based"? Can anyone else see any other way to communicate via email?

1 more...

Ah that's an easy one - what would you like to do?

My first uh... "language" was Bourne shell. Not because I thought it was a cool language, but because that's what let me do the things I wanted to do at the time: automate heaps of Linux and BSD stuff.

There are heaps of libraries and applications where C++ is the choice, e.g. video games. My friend is great at JavaScript because he loves web browser tech.

Don't stress - have fun! :)

Getting stuff down in writing is a good step; crucial for sure. But the process/ritual of decision logs doesn't necessarily get you great analysis or effective outcomes. Two stories:

... or if your team uses arguments such as "let's use X because Microsoft uses it too", "let's do X because everyone else does", or "I used it in my previous project and it worked well, so let's use it here too" then your decisions are likely suboptimal in the long term.

I've read decision logs and design docs that have included this kind of reasoning. Many meetings, everyone gets their say, it's all written down. One time, I arrived a year or two into a project and I could indeed see how exactly they came to the decision that they did. The problem was that the reasoning was super weak. Over-emphasis on process, little on problem solving skills.

Other teams I've been on were fantastic problem solvers but super sloppy. If the right people were around, in the same room, they could solve things more cleanly in a fraction of the time of some company 10x the size. But for new staff, or if those key people were not around: chaos!

I guess my conclusion is that effective decision-making comes down to balancing a whole bunch of different behaviour.

In short: software is tricky.

1 more...

So-called "backend" I was OK with. HTTP is well-specified. But it's too general a protocol for what it's being used for, so you're stuck implementing the same stuff over and over again. When using SMTP or NNTP you realise how much work the protocol does for you when building systems on top of it.

But "frontend"... Jesus, talk about abusing something that was never designed to be used like it is. Total nightmare in my opinion! UIs which are totally inconsistent in appearance and behaviour have somehow become the norm!

Did you see this article by Dan Luu? https://danluu.com/slow-device/

Super interesting. It's a discussion from a point of view I hadn't considered before: how bandwidth has increased much more than the CPU performance of web apps. I felt this in a way as my main computer until recently was a mini PC with an Intel i5-5250U processor. Despite my Internet connection going from a 10 Mbps link to a 300 Mbps link, and pings dropping from 25 ms to <5 ms, browsing the web on the device became unbearable.

None that I'm aware of, hoping others can chime in. There's an open ticket on this in the Mastodon tracker: https://github.com/mastodon/mastodon/issues/18601 Guess that's something to check every once in a while..

Keep us updated on any findings :)

I would use reportMissingData

Agreed, report feels clearer as the verb "record" is more about permanent storage and later reference.

Or even just reportMissing? Depending on what's happening around call sites, I often find I can drop generic stuff like "Data" and it's just as clear, especially when looking at a function signature. For instance:

func reportMissing(data) { ... }
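To see how that reads at a call site, here's a tiny hypothetical sketch (in Python, so snake_case; the function body and field names are made up for illustration):

```python
# Hypothetical sketch: the argument already says what kind of thing is
# missing, so a "data" suffix on the function name would add nothing.
def report_missing(fields):
    """Return a report line for each expected-but-absent field."""
    return [f"missing: {field}" for field in fields]

print(report_missing(["author", "published"]))
# ['missing: author', 'missing: published']
```

At the call site, `report_missing(["author"])` reads just as clearly as a hypothetical `report_missing_data` variant would.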

Great question. Short answer: yes!

Long answer: I did this on a production system about 2 years ago.

The system was using MySQL, which was served from 3 virtual machines. Nobody took responsibility for that MySQL cluster, so outages and crazy long maintenance windows were normal especially as there was no DB admin expertise. The system had been hobbling along for 3 years regardless.

One day the company contracting me asked for help migrating some applications to a new disaster recovery (DR) datacentre. One-by-one I patched codebases to make them more portable; even needing to remove hard-coded IP addresses and paths provided by NFS mounts! Finally I got to the system which used the MySQL cluster. After some digging I discovered:

  1. The system was only ever configured to connect to one DB host
  2. There were no other apps connecting to the DB cluster
  3. It all ran on "classic" Docker Swarm (not even the last released version)

My ex-colleague who I got along really well with wrote 90% of the system. They used a SQL query builder and never used any DB engine-specific features. Thank you ex-colleague! I realised I could scrap this insane not-actually-highly-available architecture and use SQLite instead, all in a single virtual machine with 512MB memory and 1vCPU. SQLite was perfect for the job. The system consisted of a single reader and writer. The DB was only used for record-keeping of other long-running jobs.
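The record-keeping pattern I mean looks roughly like this - a minimal sketch with a made-up `jobs` table, not the real system's schema:

```python
import sqlite3

# One process writes job records and another reads them back later:
# no cluster, no network, just one database. Table and column names
# here are hypothetical.
conn = sqlite3.connect(":memory:")  # the real thing used a file on disk
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")

# Writer: record a long-running job.
conn.execute("INSERT INTO jobs (name, status) VALUES (?, ?)",
             ("nightly-backup", "running"))
conn.commit()

# Reader: check on it later.
row = conn.execute("SELECT status FROM jobs WHERE name = ?",
                   ("nightly-backup",)).fetchone()
print(row[0])  # running
```

With one reader and one writer there's no contention to speak of, which is exactly where SQLite shines.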

Swapping it over took about 3 days, mostly testing. No more outages, no more working around shitty network administration, no more "how does the backup work again?" when DB dumps failed, no more complex config management to bring DB clusters up and down. The ability to migrate DB engines led to a significant simplification of the overall system.

But is this general advice? Not really. Programming with portability in mind is super important. But overly-generic programs can also be a pain to work with. Hibernate, JDBC et al. don't come for free; convenience comes at the cost of complexity. Honestly I'm a relational database noob (I'm more an SRE), so my approach is to try to understand the specific system/project and go from there. For example:

I recently saw a project that absolute didn't care in the slightest about this and used many vendor specific features of MS SQL all over the place which had many advantages in terms of performance optimizations.

Things I would want to learn more about:

  • were those performance optimisations essential?
  • if so, is the database the best place to optimise? e.g. smarter queries versus fronting with a dumb cache
  • are there database experts who can help out later? do we want to manage a cache?

Basically everyone always advises you to write your backend so generically with technologies like ODBC, JDBC, Hibernate, ... and never use anything vendor specific like stored procedures, vendor specific datatypes or meta queries with the argument being that you can later switch your DBMS without much hassle.

Things I would want to learn more about:

  • how many stored procedures could we be managing? 1, 100, 1000? may not be worth taking on a dependency to avoid writing like 3 stored procedures
  • is that tooling depended on by other projects already?
  • how much would the vendor-specific datatype be used? one column in one table? everywhere?
  • does using vendor-specific features make the code easier to understand? or just easier to write? big difference!

My shitty conclusion: it depends.

I don't see much points in your page, when something like MDN is available,

Yeah, this is a thing that started to irritate me when I was working on JS projects. So many articles, all describing stuff - not always correctly - that's answered by reading the standard documentation.

I just kept MDN, Node, React docs open all the time. But working with existing code I kept needing to deal with copy & pasted stuff from blog spam.

Right now I follow a few Mastodon users via an RSS-to-Email service, but the problem with that is that you can't follow private accounts/see followers-only toots. It would be great to have a full email bridge.

Ah yes, I know exactly what you mean. I follow Mastodon, PieFed, and Lemmy stuff via RSS too.

I have a little program which follows/unfollows:

apfollow kevincox@lemmy.ml
apfollow -u kevincox@lemmy.ml

Then things get delivered to my inbox. That's been working ok. I'm adding a "Following" section to the docs soon.

But I think the main idea is getting an Activity into an RFC 5322 message in a filesystem. The system doesn't really care how that file is written. It could be from an ActivityPub server sending stuff to you. But it could also be from reading an RSS feed and fetching the items. My first stab at this was actually a couple of scripts which dumped my Mastodon timeline and some Lemmy stuff to message files.

So if my ActivityPub-email bridge were running, you wouldn't also be able to access a Mastodon UI and, for example, browse other posts.

What I do now is clunky. First, I've written a couple of very basic frontends using both the Lemmy & Mastodon APIs. These expose the unique ID of each post, which I copy/paste around...

(like commenting on a random post I was linked to).

I run this command:

apubget -m https://lemmy.ml/comment/9266238 > comment.eml

Then open the file in a mail client, and reply to it. Like I said: pretty clunky! :D

One thing I've thought about is hijacking the Subject header field to hint to apas that we're replying to something. More mail clients expose modifying Subject than setting arbitrary header fields (ideally we'd set In-Reply-To). For example, for this message I'm writing now:

To: kevincox@lemmy.ml
Subject: https://lemmy.ml/comment/9266238

Ah yes know exactly what you mean bla bla bla...

Taking it further, frontends could render mailto: links. Here's one to reply to your message: mailto:kevincox@lemmy.ml?cc=fediverse@lemmy.world&subject=https%3A%2F%2Flemmy.ml%2Fcomment%2F9266238

Using Subject as either the name or the inReplyTo property of an Activity depending on its value feels unclear.

Reading RFC 6068, it's theoretically possible that we could inject an In-Reply-To in a mailto URL. It's up to the mail application to interpret it. mailto:kevincox@lemmy.ml?cc=fediverse@lemmy.world&in-reply-to=%3Chttps%3A%2F%2Flemmy.ml%2Fcomment%2F9266238%3E This encodes the message:

To: kevincox@lemmy.ml
CC: fediverse@lemmy.world
In-Reply-To: <https://lemmy.ml/comment/9266238>

bla bla bla

Just tested and found that MailMate actually handles this. Still feels unclear... I dunno. What do you think?
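For what it's worth, that URL can be generated mechanically with stdlib percent-encoding; a quick sketch (nothing apas-specific here, just RFC 6068-style escaping):

```python
from urllib.parse import quote, urlencode

to = "kevincox@lemmy.ml"
params = {
    "cc": "fediverse@lemmy.world",
    "in-reply-to": "<https://lemmy.ml/comment/9266238>",
}
# urlencode percent-escapes keys and values; quote_via=quote (with
# urlencode's default safe="") also escapes "/", "<" and ">", matching
# the hand-built URL above.
url = "mailto:" + to + "?" + urlencode(params, quote_via=quote)
print(url)
```

A frontend could drop this straight into an anchor's href and let whatever mail client is registered take over.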

I imagine part of the challenge going forward would be the hordes of programmers brought up on designing UIs using a DOM, and all the associated tooling.

My prediction is the situation could be similar to how, today, many text-only programs assume a terminal-like device. Terminals have been obsolete for years, but I personally feel they're a ball-and-chain on text UI development. The web document model could persist long after web browsers become a kind of "terminal" to load and render web documents.

Headline is a bit misleading. It's not about cadavers being ferried around the place; it's a policy change in how cadavers are distributed to schools.

Currently cadavers are donated to particular schools. The proposal is for some centralised gov. control over which schools they go to depending on shortages and demand. Seems fair enough...?

I love this easy candid style. It's not trying to push anything, for instance "8 years of meetups - how you can do the same!", or "8 years of meetups - why everything sucks".

It's a blog in that kinda old-school sense: a web log of some stuff that happened to somebody personally. So much stuff published as "blogs" is more like essays (ignoring all the shitty marketing blogs). I like essays too, but I also really like these life log things. Not sure how to find one over the other.

Thanks for sharing. I didn't expect to be inspired...!

When I drop off my electronics at "recycling" facilities, I always wonder if they don't just end up at a place like this. It's hard to tell if sending them to a local landfill wouldn't be less impactful on the environment.

Same. I'm in Australia so there is a lot of space. At the supermarket near me they have a dedicated battery recycling bin, so I guess I trust this a little more than those general recycling bins. The fact that trust is even involved is not ideal though.

For now I just try hard to keep old stuff going for my friends and family. Software-wise they all use native apps for personal and work, so I see about 7-8 years of life for each laptop/desktop.

Logs highlight problems in the decision-making process and let you analyze problems in your team.

Yeah good point.

In that process-heavy project I joined, I could see the problems quickly - within about 2 or 3 days. That meant when I was submitting code or reviewing the backlog, I knew what kind of challenges I could make and what would just be a waste of time. In other teams it could take way longer - months! - to learn how the team actually deals with challenges and design.

Wow, and you only started programming in October?! Nice work!

Interesting format. Wonder what applications make use of it.

6 more...

I've updated the docs with new sections Receiving and Reading. See 2.3.1 etc. https://apubtest2.srcbeat.com/apas.html

Thanks for your comment as it helped me write down thoughts :)

Interesting, thanks for putting the time in! I love awk and its Unix roots.

This question might not make any sense but I'm curious! What are some of your favourite languages based on their EBNF grammar alone?

(they're somewhat unpredictable too - wait, what? Why are you downvoting that?! 😂) I had nothing but positive reactions from the dotnet and MAUI communities.

Ha yes know exactly what you mean. The ActivityPub system I've written (from where you'll receive this reply) just drops any Like/Dislike activity altogether!

or is a specific company being singled out just because some low-level grunt filled in a field in a bug report?

FYI they're not a "low-level grunt". The bug author's job title is Principal Software Engineer at Microsoft with (at least) 18 years' experience.