Tom

@Tom@programming.dev
0 Posts – 11 Comments
Joined 1 year ago

Bill is a liability.


This is a very strange article to me.

Do some tasks run slower today than they did in the past? Sure. Are there some that run slower without a good reason? Sure.

But the whole article just kind of complains. It never acknowledges that many things are better than they used to be. It also just glosses over the complexities and tradeoffs people have to make in the real world.

Like this:

Windows 10 takes 30 minutes to update. What could it possibly be doing for that long? That much time is enough to fully format my SSD drive, download a fresh build and install it like 5 times in a row.

I don't know what exactly is involved in Windows updates, but it's likely 1) a lot of data unpacking, 2) a lot of file patching, and 3) done in a way that hopefully won't bork your system if something goes wrong.

Sure, reinstalling is probably faster, but it's also simpler. If your doctor told you, "The cancer is likely curable. Here's the best regimen to get you there over the next year", it would be insane to say, "A YEAR!? I COULD MAKE A WHOLE NEW HUMAN IN A YEAR!" But I feel like the article is doing exactly that, over and over.

I've got so many more stories about bad optimizations. I guess I'll just pick one.

There was an infamous (and critical) internal application somewhere I used to work. It took in a ton of data, put it in the database, and then ran a ton of updates to populate various fields and states. It went something like this (there's a rough code sketch after the list):

  • Put all data in x table with batch y.
  • Update rows in batch y with condition a, set as type a. (just using letters as placeholders for real states)
  • Update rows in batch y that haven't been updated and have condition b, set as type b.
  • Update rows in batch y that haven't been updated and have condition c, set as type c.
  • Update rows in batch y that have condition b and c and condition d, set as type d.
  • (Repeat many, many times)
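
To give a feel for it, here's a hypothetical reconstruction of a couple of those steps in plain JDBC (every table, column, and condition name is invented, and the real thing had far more steps):

// hypothetical reconstruction of the old update chain (all names invented)
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

class LegacyBatchLabeler {
    static void labelBatch(String jdbcUrl, int batch) throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             Statement stmt = conn.createStatement()) {

            // update rows in the batch with condition a, set as type a
            stmt.executeUpdate("UPDATE x_table SET type = 'a' "
                    + "WHERE batch = " + batch + " AND condition_a = 1");

            // update rows that haven't been typed yet and have condition b
            stmt.executeUpdate("UPDATE x_table SET type = 'b' "
                    + "WHERE batch = " + batch + " AND type IS NULL AND condition_b = 1");

            // ...and so on, many more times, each step silently depending
            // on the order every earlier UPDATE ran in
        }
    }
}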

It was an unreadable mess. Trying to debug it was awful. Business rules encoded as a chain of SQL updates are incredibly hard to reason about. Like, how did this row end up with that data??

A coworker and I eventually inherited the mess. Once we deciphered exactly what the rules were and realized they weren't actually that complicated, we changed the architecture to the following (sketched in code after the list):

  • Pull data row by row (instead of immediately into a database)
  • Hydrate the data into a model
  • Set up and work with the model based on the business rules we painstakingly reverse engineered (i.e. this row is type b because conditions x,y,z)
  • Insert models to database in batches
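
In very rough, simplified Java (all of the type and method names here are invented for illustration), the new shape was something like:

// simplified sketch of the replacement pipeline (all names invented)
import java.util.ArrayList;
import java.util.List;

interface Row {}                                  // one raw input row
interface RowSource extends Iterable<Row> {}      // streams rows from the feed
interface ItemDao { void insertAll(List<Item> items); }

class Item {
    String type;
    static Item from(Row row) { return new Item(); }  // hydrate fields from the row
    void applyBusinessRules() {
        // the painstakingly reverse-engineered rules live here, readable
        // in one place, e.g. "type is b because conditions x, y, z hold"
    }
}

class ImportPipeline {
    static final int BATCH_SIZE = 500;

    void run(RowSource source, ItemDao dao) {
        List<Item> pending = new ArrayList<>();
        for (Row row : source) {          // 1. pull data row by row
            Item item = Item.from(row);   // 2. hydrate into a model
            item.applyBusinessRules();    // 3. apply the rules to the model
            pending.add(item);
            if (pending.size() == BATCH_SIZE) {
                dao.insertAll(pending);   // 4. insert in batches
                pending.clear();
            }
        }
        if (!pending.isEmpty()) dao.insertAll(pending);
    }
}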

I don't remember the exact performance impact, but it wasn't markedly faster or slower than the previous "fast" SQL-based approach. We found and fixed numerous bugs along the way, and when new issues came up, they could be fixed in hours rather than days or weeks.

A few words of caution: Don't assume that building things with a certain tech or architecture will absolutely be "too slow". Always favor building things in a way that can be understood. Jumping to the wrong tool "because it's fast" is a terrible idea.

Edit: fixed formatting on Sync


Yep, absolutely.

In another project, I had some throwaway code where I used a naive approach that was easy to understand and validate. I assumed I'd need to replace it once we'd confirmed it was correct, because it would be too slow.

Turns out it wasn't a bottleneck at all. It was my first time using Java streams with relatively large volumes of data (~10k items), and they turned out to be damn fast in this case. I probably could have optimized further, but given their simplicity and speed, I ended up using them everywhere in that project.
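
The domain here is invented, but the shape of that code was about this simple:

// made-up example of the kind of naive-but-clear stream pipeline I mean
import java.util.List;

record Order(String customer, double total) {}

class Report {
    // readable first; optimize only if profiling says it matters
    static double revenueForCustomer(List<Order> orders, String customer) {
        return orders.stream()                         // ~10k items: no problem
                .filter(o -> o.customer().equals(customer))
                .mapToDouble(Order::total)
                .sum();
    }
}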

Nice video about it here: https://youtu.be/cZLed1krEEQ

Tldw: the US DOS version actually has 2 separate impossible jumps in one level that aren't present in the European DOS or NES versions.

Project Panama is aimed at improving the integration with native code. Not sure when it will be "done", but changes are coming.
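
For anyone curious, here's a minimal sketch of the foreign function & memory API that came out of Panama, calling C's strlen from Java. (This reflects the finalized Java 22 shape of the API; method names moved around while it was in preview, so treat it as illustrative.)

// minimal FFM sketch: calling C's strlen from Java
import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

class StrlenDemo {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

        try (Arena arena = Arena.ofConfined()) {        // scoped native memory
            MemorySegment str = arena.allocateFrom("hello, native world");
            long len = (long) strlen.invokeExact(str);  // downcall into libc
            System.out.println(len);                    // prints 19
        }
    }
}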

Null is terrible.

A lot of languages implicitly allow it as a valid return value for most things. That means you have to do extra checking, or something like this will blow up with an exception:

// java example
// can throw a NullPointerException if getAddress() returns null
String address = person.getAddress().toUpperCase();

// safe
String address = "";
if (person.getAddress() != null) {
    address = person.getAddress().toUpperCase();
}

There are a ton of solutions out there. Many languages have added null-coalescing and null-conditional operators -- shorthand for checks like the one above. Some languages have removed implicit nulls (like Kotlin), requiring nullability to be explicitly marked in the type. Some languages have a wrapper around nullable values, an Option type. Some remove null from the language entirely (I believe Rust falls into this camp, using an Option type in its place).

Not having null isn't particularly common yet, and it isn't something existing languages can just adopt without breaking backwards compatibility. However, languages have been adding features over time to make nulls less painful, and most offer some subset of the above to help.

I do think Option types are a fantastic solution, since they make you deal with the possibility that a value is absent in a particular place. Java has had them for basically 10 years now (since Java 8).

// optional example
import java.util.Optional;

class Person {
    private String address;

    // prefer this if a null could ever be returned
    public Optional<String> getAddress() {
        return Optional.ofNullable(address);
    }

    // not this
    // (shown together for comparison -- a real class would have only one)
    public String getAddress() {
        return address;
    }
}
When consuming the Optional, you're forced to handle the empty case, which you can do in a variety of ways.

// set a default
String address = person.getAddress().orElse("default value");

// explicitly throw an exception instead of an implicit NullPointerException as before
String address = person.getAddress().orElseThrow(SomeException::new);

// use in a closure only if it exists
person.getAddress().ifPresent(addr -> logger.debug("Address {}", addr));

// like the first example, but mapping to uppercase and returning a default if empty
String address = person.getAddress().map(String::toUpperCase).orElse("");

Wow, that looks really nice!

I use Lua for PICO-8 stuff and it works well enough, but certain parts are just needlessly clumsy to me.

Looks like TIC-80 supports Wren. Might have to give that a try sometime!

Which they could have done a much better job with.

It was basically just hosted SVN if I remember right, and they never added Git support when it became the de facto version control system.

Based on some places I used to work, upper management seemed convinced that the "idea" stage was the hardest and most important part of any project, and that the easy part was the planning, requirements gathering, building, testing, changing, and maintaining of custom business applications for needlessly complex and ever-changing requirements.

I'm somewhat confused by your statements, so perhaps I don't understand.

Building functions/objects so that their behavior can be changed by passing different objects into them, based on some interface, is called dependency injection. Some subset of behavior is determined by whatever gets passed in. E.g., to keep a logger class from having to understand how to write logs, you could create a WriteTo interface and various implementations like WriteToDatabase, WriteToFile, WriteToStdout, and WriteToNull.
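
A minimal sketch of that logger example (the WriteTo names are from above; everything else is invented for illustration):

// dependency injection: the logger only knows the WriteTo interface
interface WriteTo {
    void write(String line);
}

class WriteToStdout implements WriteTo {
    public void write(String line) { System.out.println(line); }
}

class WriteToNull implements WriteTo {
    public void write(String line) { /* discard, e.g. for tests */ }
}

class Logger {
    private final WriteTo writer;

    Logger(WriteTo writer) {   // the behavior is injected, not hard-coded
        this.writer = writer;
    }

    void log(String message) {
        writer.write(message);
    }
}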

When you create this example logger, you have to choose which writer to pass at the time you write the code, e.g. new Logger(new WriteToDatabase(config)). But maybe you don't want to make that decision yet -- maybe you want a config file to decide which writer(s) to create. The pattern for picking between dependencies at runtime is called a factory. In this case, you might make a WriterFactory to pick the right writer, or perhaps a LoggerFactory that hides the creation of both the writer and the logger.
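
Continuing the sketch above, the factory part might look like this (purely illustrative):

// a factory hides which implementation gets picked at runtime
class WriterFactory {
    static WriteTo fromConfig(String name) {
        return switch (name) {
            case "stdout" -> new WriteToStdout();
            case "null"   -> new WriteToNull();
            // ...plus cases for WriteToDatabase, WriteToFile, etc.
            default -> throw new IllegalArgumentException("unknown writer: " + name);
        };
    }
}

class LoggerFactory {
    // callers never see how the writer or the logger get constructed
    static Logger fromConfig(String writerName) {
        return new Logger(WriterFactory.fromConfig(writerName));
    }
}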

So a factory is really just a facade that hides the runtime switching of how an object gets created.

Also, the term dependency injection often gets confused with what you see in Java, C#, and similar frameworks in other languages -- those usually use what's called a "DI container" or "IoC container". These manage and facilitate how dependency injection happens within a project, often via annotations (e.g. @Autowired). These containers are powerful, but sometimes complicated.

However, you can absolutely still do DI without DI containers, and I think advocating for not using DI generally (and related patterns like factories) is rather misguided.