robinm

@robinm@programming.dev
0 Post – 57 Comments
Joined 11 months ago

Moving to Git is nice, but I don't understand why they don't self-host a GitLab instance.


I am always doubtful when people say that accessing information inside git is hard. I totally agree that the defaults in git can be improved (and they are; git restore and git switch are much better alternatives to git checkout, which I no longer use). So let’s review the section “A Few Reasons Why SQLite Does Not Use Git”:

“Git does not provide good situational awareness”

git log --graph --oneline --author-date-order --since=1week

Make it an alias if you use it often. Aliases are how you create your own good defaults (until everyone uses the same alias, at which point it should become part of the base set of commands).
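For example, a minimal sketch (the alias name “overview” is made up):

    git config --global alias.overview "log --graph --oneline --author-date-order --since=1week"
    git overview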

“Git makes it difficult to find successors (descendants) of a check-in”

git log --graph --oneline --all --ancestry-path ${commit}~..

Likewise, you could consider making it an alias if you use it often. Aliases can also serve as post-it notes to help you remember the commands that you find useful but only use once in a blue moon!
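Since this one takes an argument, it needs the shell-function form of an alias (the name “descendants” is made up):

    git config --global alias.descendants '!f() { git log --graph --oneline --all --ancestry-path "$1"~..; }; f'
    git descendants <commit>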

“The mental model for Git is needlessly complex”

I may agree about that one. For reference, this is what the article says:

A user of Git needs to keep all of the following in mind:

  • The working directory
  • The "index" or staging area
  • The local head
  • The local copy of the remote head
  • The actual remote head

If git fetch was run automatically every so often, as well as git push (to a personal branch, of course), then this model could be simplified to:

  • the working directory
  • the “index” or staging area (I actually think that being able to have more than one, for drafting multiple commits at once, like a fix and a feature at the same time, would be better than having a single index)
  • your working copy of the shared branch
  • the shared branch

And integrating your changes (merging/rebasing) should probably be exclusively done using a PR-like mechanism.
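Recent versions of Git can already automate the fetch half of this, by the way; a sketch, assuming Git is recent enough (roughly 2.30+) and a supported scheduler is available:

    git maintenance start   # registers background maintenance; its prefetch task
                            # regularly fetches from the remotes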

“Git does not track historical branch names”

I’m skeptical about the usefulness of this. But since git was my first real VCS (10 years ago), it may just be that I have not used a workflow that took advantage of persistent branches. I assume that git annotate could be a solution here.

“Git requires more administrative support”

most developers use a third-party service such as GitHub or GitLab, and thus introduce additional dependencies.

That’s absolutely true but I’m not sure it’s a real issue. Given how many strategies there are for CI/CD (and none is the definitive winner yet) I do think that being able to select the right option for you/your team/your org is probably a good idea.

“Git provides a poor user experience”

https://xkcd.com/1597/

I highly disagree with that xkcd comic. Git is compatible with all workflows, so you only ever use a subset of all the commands. Of course you will have more commands that you never use if a piece of software supports all the workflows that you don't use. But you need about 15 commands to do stuff, 30 to be fluent, and some more to be able to help anyone. Compared to any other complex software that I use, I really don't think that's an unreasonably high count.

That being said, I totally agree that git from 10+ years ago was more complex, and we should correctly teach juniors what is needed. HTML/CSS/JS is a nightmare of complexity, but that doesn't stop 15-year-old kids with no mentoring from building cool stuff, because you don't need to know everything to do most of the things you may think of, just a good minimal set of tools. And people should definitely take the time to learn git, and stop using outdated guides. Any guide that doesn't use git switch, git restore and git rebase --interactive, and doesn't show how to inspect the history in depth (git log --graph or any graphical interface that shows the history as a graph, git show, and more generally how you can filter the history any way you want: by author, date, folder, file type, …) is definitely not a good guide.
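A few sketches of that filtering (author, date and paths are placeholders):

    git log --author=alice --since=2023-01-01 -- src/   # by author, date and folder
    git log --oneline -- '*.css'                        # by file type
    git show <commit>                                   # inspect a single commit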


To sum up, I think that from this presentation fossil seems more opinionated than git, which means that it will be simpler as long as your workflow exactly matches the expected one, whereas using git requires you to curate its list of commands and select only the ones useful for yours.


Read your own code that you wrote a month ago. For every wtf moment, try to rewrite it in a clearer way. With time you will internalize what is or is not a good idea. Usually this means naming your constants, moving code inside a function to give it a friendly name that explains what the code does, or moving code out of a function because the abstraction you chose was not a good one. Since you have 10 years of experience it's highly possible that you already do that, so just continue :)

If you are motivated, I would advise you to take a look at Rust. The goal is not really to be able to use it (even if it's nice to be able to write fast code to speed up your Python), but the Rust compiler is like a very demanding teacher that will not forgive any mistakes, while explaining why it's not a good idea to do that and what you should do instead. The quality of the errors is crucial; this is what will help you understand and improve over time. So consider Rust an exercise for becoming a better Python programmer: whatever you try to do in Rust, try to understand how it applies to Python. There are many tutorials online. The official book is a good start. And in general, learning new languages with a very different paradigm is the best way to improve, since it will help you see things from a new angle.
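As a small taste of that teaching style, here is a sketch that intentionally does not compile; rustc rejects it and explains why the two uses of v conflict:

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];   // immutable borrow of `v` starts here
        v.push(4);           // error[E0502]: cannot borrow `v` as mutable
        println!("{first}"); // ...because the immutable borrow is still used here
    }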

As a rough estimate, if you include everything (appearance, discussion, functionality, interaction with other controls, …), I would say that every single input field or button is about a day of work. And then you start to realise how many buttons there are in any GUI and how much it will cost.


Syntax has never really been the issue. The closest thing to plain-English programming is legal documents and contracts. As you can see, they are horrible to understand, but that's the only way to correctly specify exactly what you want, and code is much better at it. Another data point is visual languages like Lego Mindstorms or LabVIEW. It's quite easy to do basic things, but it doesn't scale at all.


That's well written. I think that requiring 2+ code reviews could also help, because with time more people will gain knowledge of the dark parts of the codebase, just by reviewing the PRs of “Martin” when he works on them.


I would have liked a link to the LKML thread. Usually they are quite informative.

If your hierarchy is trying to destroy the product you create, just leave. You are not the main stakeholder, and you do not get benefits from the well-being of your product. The only things that should be important to you as an employee are “is my job interesting” and “are the work conditions great”. If you have to fight your management, they have already lost you, because they just broke your trust, as well as the second point.

I never understood why Python won against Ruby. I find Ruby an even better executable-pseudocode language than Python.

Good advice, clear, simple and to the point.

Stated otherwise: “whenever you need to add comments to an expression, try to use named intermediate variables, methods or free functions”.
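A sketch of the idea (all names are made up):

    struct User { is_admin: bool, id: u32 }
    struct Doc { owner_id: u32 }

    fn main() {
        let user = User { is_admin: false, id: 7 };
        let doc = Doc { owner_id: 7 };
        // Instead of `if user.is_admin || user.id == doc.owner_id { … } // may edit`,
        // name the condition so the comment becomes unnecessary:
        let user_may_edit = user.is_admin || user.id == doc.owner_id;
        if user_may_edit {
            println!("editing allowed");
        }
    }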


If you try to learn git one command at a time on the fly, git is HARD. If you take the time to understand its internal data structure, it's much, much easier to learn. Unfortunately most people try to do the former, because it works well (or better) for most tasks.

I can't recommend the git parable enough.
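A quick way to poke at that data structure, in any existing repository:

    git cat-file -p HEAD            # a commit: a tree id, parent ids, author, message
    git cat-file -p 'HEAD^{tree}'   # that tree: blobs (files) and sub-trees (directories)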


Usually when people say “I suck at maths”, it means that they are bad at doing manual calculation. Maths is extremely useful in programming, but it’s absolutely not the same kind of maths. I don’t think that the grades you got in maths at school influence in any way whether you will be good or bad at programming.

Their take on what they call capabilities is very interesting. Basically anything that would make a function non-pure seems to be declared explicitly.

A computational effect or an "effectful" computation is one which relies on or changes elements that are outside of its immediate environment. Some examples of effectful actions that a function might take are:

  • writing to a database
  • throwing an exception
  • making a network call
  • getting a random number
  • altering a global variable

I do understand why old Unicode versions re-used “i” and “I” for the Turkish lowercase dotted i and the Turkish uppercase dotless I, but I don't understand why more recent versions have not introduced two new characters that look exactly the same but don't require locale-dependent knowledge to do something as basic as “to lowercase”.

vim can have IDE-like capabilities thanks to LSP and tree-sitter. That's a real game changer, and it's quite easy to set up with something like kickstart.nvim.

IIRC the orbit of Mercury doesn't work with Newton's model, and astronomers had predicted the discovery of Vulcan, a small planet between Mercury and the Sun. So a new model had to be invented, since Vulcan couldn't be found.

And you should not forget that Emacs is way harder when you are 4 because your hands are smaller!

Couldn't this be solved by making push_back an inline function (or at least inlining the check on capacity, with the rest of the non-trivial part in a separate non-inline function)?
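A hedged sketch of that split in Rust terms (Rust's real Vec::push is organised along these lines, but this is not its actual code):

    struct Stack<T> {
        data: Box<[Option<T>]>,
        len: usize,
    }

    impl<T> Stack<T> {
        #[inline]
        fn push(&mut self, v: T) {
            if self.len == self.data.len() {
                self.grow(); // rare path, kept out of the inlined hot path
            }
            self.data[self.len] = Some(v);
            self.len += 1;
        }

        #[cold]
        #[inline(never)]
        fn grow(&mut self) {
            let new_cap = self.data.len().max(1) * 2;
            let mut new: Vec<Option<T>> = Vec::with_capacity(new_cap);
            for slot in self.data.iter_mut() {
                new.push(slot.take());
            }
            new.resize_with(new_cap, || None);
            self.data = new.into_boxed_slice();
        }
    }

    fn main() {
        let mut s = Stack { data: Vec::new().into_boxed_slice(), len: 0 };
        for i in 0..10 {
            s.push(i);
        }
        println!("pushed {} items", s.len);
    }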


What do you mean by that?

I’m not familiar with C tooling, but I have done multiple projects in C++ (in a professional environment) and AFAIK the tooling is the same. Tooling for C++ is a nightmare, and that’s an understatement. Most of the difficulty is self-inflicted, like not using cmake/meson but a custom build system, relying on system libraries instead of using Conan or vcpkg, not using smart pointers, … but adding basically anything (LSP, code coverage, a new dependency, clang-format, clang-tidy, …) is horrible in those environments. And if you compare the quality of those tools to the ones of other languages, they are not even close. For example, compare the lints given by clang-tidy to the ones of Rust's clippy.

If adding any of those tools to a legacy C project took no more than an hour, then yes, it would be disingenuous not to compare C + tooling with Rust, but unfortunately it's not the case.

step 1: learn to comment everything. This will help code reviewers catch errors, because your code doesn’t match the comments

step 2: write your code in a way that makes comments useless, and stop writing them

step 3: write your code just like you did in step 2, but document all the things that you didn’t do, or why the code is more complicated than the naive approach. If your arguments are weak, you are not in step 3, but in step 1.
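A sketch of what a step-3 comment can look like (the profiling claim is hypothetical, it's only there to show the shape):

    fn squares(n: usize) -> Vec<u64> {
        // Pre-allocating is deliberate: profiling (in this hypothetical service)
        // showed reallocations dominating the hot loop for large `n`.
        let mut out = Vec::with_capacity(n);
        for i in 0..n as u64 {
            out.push(i * i);
        }
        out
    }

    fn main() {
        println!("{:?}", squares(5));
    }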

That's an interesting idea, but as someone else pointed out, using a voice modulator would be much better. Technical skills are important, but human behaviors are too. I would not trade a nice average coworker for someone who is better technically but doesn't know how to communicate. And typing is complementary to, not a replacement for, voice communication, since the amount of information you can share in a minute is 3-5 times higher by voice.

I use a 42-key layout modified from bépo (a French Dvorak-inspired layout) with the AltGr layer of ergol. Go check this AltGr layer, it's awesome for programming, and there is a version compatible with qwerty and lafayette.

╭╌╌╌╌╌┰─────┬─────┬─────┬─────┬─────┰─────┬─────┬─────┬─────┬─────┰╌╌╌╌╌┬╌╌╌╌╌╮
┆     ┃   ¹ │   ² │   ³ │   ⁴ │   ⁵ ┃   ⁶ │   ⁷ │   ⁸ │   ⁹ │   ⁰ ┃     ┆     ┆
┆     ┃   ₁ │   ₂ │   ₃ │   ₄ │   ₅ ┃   ₆ │   ₇ │   ₈ │   ₉ │   ₀ ┃     ┆     ┆
╰╌╌╌╌╌╂─────┼─────┼─────┼─────┼─────╂─────┼─────┼─────┼─────┼─────╂╌╌╌╌╌┼╌╌╌╌╌┤
·     ┃     │   ≤ │   ≥ │  *¤ │   ‰ ┃  *^ │     │   × │  *´ │  *` ┃     ┆     ┆
·     ┃   @ │   < │   > │   $ │   % ┃   ^ │   & │   * │   ' │   ` ┃     ┆     ┆
·     ┠─────┼─────┼─────┼─────┼─────╂─────┼─────┼─────┼─────┼─────╂╌╌╌╌╌┼╌╌╌╌╌┤
·     ┃     │   ⁽ │   ⁾ │     │   ≠ ┃  */ │   ± │   — │   ÷ │  *¨ ┃     ┆     ┆
·     ┃   { │   ( │   ) │   } │   = ┃   \ │   + │   - │   / │   " ┃     ┆     ┆
╭╌╌╌╌╌╂─────┼─────┼─────┼─────┼─────╂─────┼─────┼─────┼─────┼─────╂╌╌╌╌╌┴╌╌╌╌╌╯
┆     ┃  *~ │     │     │   – │     ┃   ¦ │   ¬ │  *¸ │     │     ┃           ·
┆     ┃   ~ │   [ │   ] │   _ │   # ┃   | │   ! │   ; │   : │   ? ┃           ·
╰╌╌╌╌╌┸─────┴─────┴─────┴─────┴─────┸─────┴─────┴─────┴─────┴─────┚ · · · · · ·

Shared libraries save RAM.

Citation needed :) I was surprised, but I read (sorry, I can't find the source again) that in most cases dynamic libraries are loaded one time, and usually very few times. This makes the RAM gain much less obvious. In addition, static linking allows inlining, which itself allows aggressive constant propagation and dead-code elimination, on top of LTO. All of this decreases the binary size, sometimes in non-negligible ways.


Lol. I read “Other oysters gaining more popularity” and found it very appropriate!

With Bram Moolenaar's death, I sincerely think that vim will no longer be able to play catch-up with nvim. Bram Moolenaar did an amazing job with vim, but with his death I think that vim is going to be an editor of the past, just like vi is an editor of the past. And nvim is its successor, since it's where the developers have moved.

You seem to have a severe issue, so I'm not sure what I'm going to say will help.

Learning something and then forgetting it is absolutely normal. Repetition over exponentially longer intervals, with sleep in between, helps a lot. Some people use flashcards to help with memorisation. The idea is simple: when you learn something, you write question + answer on a piece of paper (usually an index card, for easy manipulation) and put it in a box. This box has multiple compartments: every day, every second day, once a week, once every second week, once every second month, for example. When you add a card, you add it to the “every day” compartment. Then each day you open all the compartments due that day and ask yourself all the questions. If you correctly remember the answer, you move the card to the next compartment; if you don't, you put it back in the “every day” one.

Another way to help you understand and remember things is to explain them to others. If you don't have someone to explain what you just learned to, you can create youtube videos (even if no one will watch them, do as if you had an audience). As a bonus, you now have a video that explains, in your own words, something you just learned, in case you ever forget it!

Someone found the link to the article I was thinking about.

So cute!

Nothing prevents you from using dynamic linking when developing and static linking with aggressive LTO for public releases.
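For a Rust project, for example, the release side of that setup is a couple of Cargo.toml lines (a sketch, not the only way to do it):

    [profile.release]
    lto = "fat"         # whole-program LTO across all statically linked crates
    codegen-units = 1   # trade build time for more cross-crate optimisation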

Same in France

Cardboard is actually quite good at heat insulation. If you have an electric oven (no flame) and put the temperature below 200°C (ignition happens at a slightly higher temperature, but ovens aren't precise), there is no risk. So you can totally reheat a pizza at 180°C on its cardboard.

It's especially true when you want to parse some json/xml/whatever. Just describe your data structures with regular structs and enums, add serde, and done! It's like magic!
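A minimal sketch, assuming serde (with its derive feature) and serde_json as dependencies:

    use serde::Deserialize;

    #[derive(Deserialize, Debug)]
    struct User {
        name: String,
        age: u32,
    }

    fn main() -> Result<(), serde_json::Error> {
        let user: User = serde_json::from_str(r#"{"name": "Ada", "age": 36}"#)?;
        println!("{user:?}");
        Ok(())
    }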

I absolutely agree that method extraction can be abused. One should not forget that locality is important. Functional idioms do help to minimise the layers of intermediate functions. Lambdas/closures help too, by keeping the function much closer to its use site. And a local variable can sometimes be a better choice than a function that returns just an expression.
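A sketch of the closure point (the tax rate is made up):

    fn main() {
        let prices = [10.0_f64, 25.5, 3.2];
        // The helper sits right next to its only use site,
        // instead of being a far-away single-use method.
        let with_tax = |p: f64| p * 1.20;
        let total: f64 = prices.iter().map(|&p| with_tax(p)).sum();
        println!("total: {total:.2}");
    }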

This new OKLCH color space looks really nice to use. It's surprising that it's really human-readable; I wouldn't have guessed that you could get that for arbitrary colors.

I'm a bit surprised. Why do the OKLAB gradients look better than OKLCH?

I need to re-try it. I really like how LSP/DAP are first-class citizens, including the keybindings, and that there are better text objects than in vanilla neovim. Last time I tried it there were a few things that were not that easy to set up (I forget what), but I should definitely take the time to learn it.

I just wish that neovim/kakoune/helix had a marketplace just like vscode. It makes discovery and installation so much easier when everyone uses the same tools.

That's true. But at least you will have evidence that Martin doesn't conform to the team rules.

I would even have said that both throwing and catching should be pure, just like returning an error value and handling it should be pure; it's the reason for the throw/error itself that is impure. Like if you throw an IO error, it's only after doing the impure IO call, and the rest of the error reporting/handling can itself be pure.


Just a remark: C++ has exactly the same issues. In practice both clang and gcc have good ABI stability, but not perfect, and not between each other. But in any case, templates (and global mutable statics, for most use cases) don't work through FFI.

I think you don't understand what @CasualTee said. Of course dynamic linking works, but only when properly used. And in practice dynamic linking is a few orders of magnitude more complex to use than static linking. Of course you still have ABI issues when you statically link pre-compiled libraries, but in a statically linked workflow you are usually building the libraries yourself, removing all ABI issues. Of course if a library uses a global and you statically link it two times (with two different versions) you will have an issue, but at least you can easily check that a single version is linked.
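For that last check, a sketch of the kind of commands involved (names are placeholders):

    ldd ./myprog             # what a dynamically linked binary will load at run time
    cargo tree --duplicates  # in a Rust project: crates linked in under several versions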

There are no problems other than versioning and version conflicts, and even that is a solved problem.

If it was solved, “DLL hell” wouldn't be a common expression and docker would have never been invented.

You get into all kinds of UB when interacting with a separate DSO, especially since there is minimal verification of ABI compatibility when loading a dynamic library.

This statement makes no sense at all. Undefined behavior is just behavior that the C++ standard intentionally did not impose restrictions upon by leaving the behavior without a definition. Implementations can and do fill in the blanks.

@CasualTee was talking specifically about UB related to dynamic linking, which simply does not exist when statically linking.

Yes, dynamic linking works in theory, but in practice it's hell to make it work properly. And what advantages does it have compared to static linking?

  • Less RAM usage? That's not even guaranteed, because static linking allows aggressive inlining, constant propagation, LTO and other fun optimisations.
  • Easier dependency upgrades? That's mostly true for C, assuming you have perfect backward ABI compatibility. And nothing proves that your binary is really compatible with newer versions of its libraries. And static dependency upgrades are an issue only because most Linux distributions don't have a workflow in which updating a dependency triggers the rebuild of all dependent binaries; if they did, it would just be a question of download speed. Given the popularity of tools like docker, which effectively transforms dynamic linking into the equivalent of static linking since all dependencies' versions are known, I would say that a lot of people prefer the comfort of static linking.

To sum up: are all the complications specifically introduced by dynamic linking, compared to static linking, worth it for a non-guaranteed gain in RAM, a change in the tools of Linux maintainers, and extra download time?