jadero

@jadero@programming.dev
1 Post – 143 Comments
Joined 12 months ago

Fully retired now and one of the things I'd like to do is get back into hobby programming through the exploration of new and new-to-me programming languages. Who knows, I might even write something useful someday!

I learned that Android was not open under my personal definition of "open" right from the outset, because there was no programmatic access to telephony. My first project was to build an on-board answering machine with call screening capabilities.

I used an answering machine on my landline to avoid paying for caller id and voicemail and wanted to do the same with my cellphone. I was very disappointed to learn that this was not possible, at least with my skillset.

I knew that things were going the wrong way when my Tasker script to manage airplane mode stopped working because Android locked that setting away. My use case there was that lack of connectivity at the gym and at home meant that connection attempts were draining my battery and heating up the phone. Now, of course, Android does a much better job of that particular task on its own, but it still makes me cranky. :)

Everything that has happened since has only cemented my opinion that Android is not actually an open platform. I do see many of the changes as potentially valuable security measures for the masses, but I wish that it wasn't quite so difficult for a power user to use the power of the little computer we carry in our pockets.


I eventually learned to never trust any restrictions on the user.

I quickly learned to make sure everyone had a copy of decisions made, so that I could charge by the hour for changes. I eventually learned to include examples of what would and would not be possible in any specification or change order.

The left side (linear) looks like the code I write while I'm trying to figure out whether I understand the problem, or while I'm not quite sure what all I need to do to prove that I can (or cannot!) solve the problem.

The code on the right, with all the "abstractions", looks like the code I end up with after I've got everything sorted out and want to make things easier to read, find, and maintain.

I actually find the code on the right easier to read. I treat it like a well written technical manual. For me, understanding starts with time spent on the table of contents just to get a sense of the domain. That is followed by reading the opening and closing sections of each chapter, and finally digging into the details. I stop at any level that meets my needs, including just picking and choosing based on the table of contents if there is no clear need to understand the entire domain in order to accomplish my goal.

I've preferred code written in that "abstracted" style since discovering the joy of GOSUB and JSR on my VIC-20.


Old fart warning!

Presentation is left to the reader's client. Do you want dark mode? Get a markdown editor/reader that supports it. Do you want a serif font? Again, that's the client's choice and not part of the document.

I remember when that was how the web worked. All that markup was there to define the structure of the document, and the client rendered it as set by the user.

Some clients were better than others. My favourite was the default browser in OS/2 Warp, which allowed me to easily set the display characteristics of every tag. The end result was that every site looked (approximately) the same, which made browsing so much nicer, in my opinion.

Then someone decided that website creation should be part of the desktop publishing class (at least at the school I taught at). The world (wide web) has never recovered.


In that spirit, I will call attention to your first sentence, specifically the comma. In my opinion, that can be improved. One of three other constructions would be more appropriate:

  • I am really happy when people are quite strict in code reviews. It makes me feel safer and I get to learn more.
  • I am really happy when people are quite strict in code reviews, because it makes me feel safer and I get to learn more.
  • I am really happy when people are quite strict in code reviews; it makes me feel safer and I get to learn more.

The first of my suggested changes is favoured by those who follow the school of thought that written sentences should be kept short and uncomplicated to make processing easier for those less fluent. To me, it sounds choppy, as though you've omitted someone asking "Why?" after the first sentence.

Personally, I prefer the middle one, because it is the full expression of a complete state of mind. You have a feeling and a reason for that feeling. There is a sense in which they are inseparable, so not splitting them up seems like a good idea. The "because" explicitly links the feeling and reason.

The semicolon construction was favoured by my grade school teachers in the 1960s, but, as with the first suggestion, it just feels choppy. I tend to overuse semicolons, so I try to go back and either replace them with periods or restructure the sentences to eliminate them. In this particular case, I think the semicolon is preferable to both comma and period, but still inferior to the "because" construction.

I've clearly spent too much time hashing stuff out in writers' groups. :)


They are just the biggest asshole in the room.

So one day the different body parts were arguing over who should be in charge.

The eyes said they should be in charge, because they were the primary source of information about the world.

The stomach said it should be in charge because digestion was the source of energy.

The brain said it should be in charge because it was in charge of information processing and decision-making.

The rectum said nothing, just closed up shop.

Before long, the vision was blurry, the stomach was queasy, and the brain was foggy.

Assholes have been in charge ever since.

I have two hypotheses for why some kinds of software grow worse over time. They are not mutually exclusive and, in fact, may both be at work in some cases.

Software has transitioned from merely complex to chaotic. That is, there is so much going on within a piece of software and its interactions with other pieces of software, including the operating system itself, that the mathematics of chaos are often more applicable than logic. In a chaotic system, everything from seemingly trivial differences between two ostensibly identical chips to the order in which software is installed, updated, and executed has an effect on the operating environment, producing unpredictable outcomes. I started thinking about the systems I was using with this in mind sometime in the early 2000s.

The "masters" in the field are not paying enough attention to the "apprentices" and "journeymen. Put another way, there are too many programmers like me left unsupervised. I couldn't have had a successful career without tools like Visual Basic and Access, the masterful documentation and tutorials they came with, and the wisdom to make sure I was never in a position where my software might have more than a dozen users at a time at any one site. Now we have people who don't know enough to use one selection to limit the options for the next selection juggling different software and frameworks trying to work in teams to do the bidding of someone who can barely type. And the end result is supposed to be used by thousands of people on all manner of equipment and network connections.

One reason that open source software seems more reliable is that people like me, even if we think we can contribute, are mostly dissuaded by the very complexity of the process. The few of us who do navigate the system to make a contribution have our offerings carefully scrutinized before acceptance.

Another reason that open source software seems more reliable is that most of it is aimed at those with expertise or desiring expertise. At least in my experience, that cohort is much more tolerant of those things that more casual users find frustrating.

Good article that I think does a pretty good job of outlining the problems of "Computer time is less expensive than programmer time."

I was "raised" on the idea that end-user time is more valuable than programmer time and nobody really talked about computer time except in the case of unattended-by-design systems like batch processing. Even those were developed to save end-user time by, for example, preparing standard reports overnight so that they would be ready for use in the morning.

I think that one place we went off the rails was the discovery that one way to manage efficiency was by creating different classes of end-user: internal and external. Why would management care about efficiency when the cost of inefficiency is paid by someone else?

So much software is created explicitly for the purpose of getting someone else to do the work. That means the quicker you get something out there, the quicker you start benefiting, almost without regard to how bad the system is. And why bother improving it if it's not chasing customers away?

And yet more sites do it, even on desktop. As far as I can tell, most of them are also doing it in a way that breaks security by validating the username before asking for the password.


There was a thread elsewhere asking whether a toggle should show the current state or the desired state. There was enough disagreement that it quickly became apparent that, whatever else the toggle does, there should be something external to the toggle showing the possible states, indicating which way to move the toggle regardless of toggle appearance.


Maybe I read things too literally, but I thought "Fahrenheit 451" was about a governing class controlling the masses by limiting which ideas, emotions, and information were available.

"Brave New World" struck me as also about controlling the masses through control of emotions, ideas, and information (and strict limits on social mobility).

It's been too long since I read "20,000 Leagues Under the Sea", but I thought of it as a celebration of human ingenuity, with maybe a tinge of warning about powerful tools and the responsibility to use them wisely.

I don't see a lot of altruistic behaviour from those introducing new technologies. Yes, there is definitely some, but most of it strikes me as "neutral" demand creation for profit or extractive and exploitive in nature.

There are a few things I've taken from that article on first reading:

  1. I was substantially correct in my understanding of how multidimensional matrices and neural networks are used. While unsurprising given the amount of reading I've done over the last several decades on various approaches to AI, it's still gratifying to feel that I actually learned something from all that reading.
  2. I saw nothing in there to argue against my thesis that things like ChatGPT may be doing for intelligence what evolutionary biology has done to creationism. In the case of evolution, it has forced creationists to fall back on a "God of the Gaps" whose gaps grow ever smaller. ChatGPT et al have me thinking that any attribution of mind or intelligence to "mystery" or the supernatural or whatever hand-waving is in vogue is, or soon will be, consigned to ever smaller gaps. That is, it is incorrect to claim that intelligence, human or otherwise, is currently and will forever remain unexplainable.
  3. The fact that we cannot easily work out exactly how a particular input was transformed to a particular output strikes me as a "fake problem." That is, given the scale of operations, this difficulty of following a single throughline is no different from many other processes we have developed. Who can say which molecules go where in an oil refinery? We have only a process that is shown to be useful in the lab then scaled to beyond comprehension in industry. Except that it's not actually beyond comprehension, because everything we need to know is described by the process, validated at small scales, and producing statistically similar useful results at large scales. Asking questions about individual molecules is asking the wrong questions. So it is with LLMs and transformers: the "how it works" is in being able to describe and validate the process, not in being able to track and understand individual changes between input and output at scale.
  4. Although not explicitly addressed, the "hallucinatory" results we occasionally see may have more in common with the ordinary cognitive failures we are all subject to than anything that can be labelled as broken. Each of us has in our backgrounds something that got misclassified in ways that, when combined with the way we process information, lead to wild conclusions. That is why we have learned to compare and contrast our results with the results of others and have even formalized that activity in science. So it may be necessary to apply that activity (compare and contrast) with other systems, including the ones built into our brains.

Anyway, some pseudorandom babbling that I hope is at least as useful as a hallucinating AI.


I'd just like to take a moment to commend you on an absolutely exquisite answer.

Given time, there is a slim chance I could have covered the same ground and no chance that it would have been as clear and concise.

Does anything ever truly die?

https://ruffle.rs/


I'm not sure, but I think they were making a joke. Germany created the Enigma machine. Turing et al did some seminal work as a result of the need to quickly decrypt Enigma messages. Ergo, we wouldn't have computers without the Germans.

I'm pretty sure non-programmers share much of the blame. Here's what I imagine goes through the minds of most people, especially management types.

"Oh, a nerd. Great we need another nerd in here because things are not moving fast enough."

I've had job offers for everything from equipment maintenance and repair (because there was a PLC hooked up) to network administrator. It's all computers, right?

When trying to use some of the truly atrocious stuff that gets rolled out with a web interface, I get the distinct impression that random "nerds" are dropped into random slots. There is no consideration that maybe saying "nerd" is like saying "doctor". If that's all you look for, you might get an economist instead of a surgeon.


I actually think it's in some ways a good thing to overlook the technical side to a degree, because technical skills are generally a lot easier to teach than people skills. Assuming the fundamentals are there, at least.

At one of my favourite places to work, the owner had a sticky note on the side of his monitor that read "Hire for attitude, train for skill, reward for excellence."

During one of our training sessions (I was teaching him Excel), I noticed that the sticky note was a different colour. I asked him about it and he said he rewrites it every Monday on a different colour so that it's always visible and always fresh in his mind because it's too easy to forget, even though he thought it was the secret to running a successful business.

I found that starting with a "critical path" analysis was very useful. Basically, identify which components or activities need to be at least somewhat functional in order to deal with a different component or activity.

This gives you a list of problems you have to solve in the order they need to be solved. That is, there is no point spending time on writing to disk until you've figured out what you're writing to disk and how you're going to collect whatever needs to be written.
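
To make that concrete, here's a minimal sketch (Python, with made-up task names) of turning a dependency list into an order of attack:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical components, each mapped to the components it depends on.
# "write to disk" can't be tackled until we know what we're collecting
# and what format it takes.
tasks = {
    "collect input": set(),
    "choose file format": {"collect input"},
    "write to disk": {"collect input", "choose file format"},
}

# The dependency graph falls out as a workable order of attack.
print(list(TopologicalSorter(tasks).static_order()))
# -> ['collect input', 'choose file format', 'write to disk']
```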

Virtually every critical path will have some easy stuff on it that needs to be in place before the hard stuff. That gets you (or at least me!) in the right frame of mind and builds momentum towards the goal. I've often found that doing the work leading up to the hard part made the hard part easier, at least partly because there is now a concrete problem ready to solve instead of a nebulous wondering how to approach it.

If you try this approach, make sure you avoid the common practice of building "placeholder" stubs for anything other than components further along the critical path than your current position. Even then, I found little value in stubbing something in before I was actually getting close to the point of needing it in order to continue working.

Note that this process doesn't mean it's not appropriate to go ahead and identify where the dragons are and check to see if you have what it takes to slay them. Back to my example of writing to disk, if you've never written anything to disk before and it looks scary, it might be a good idea to create a completely unrelated toy project to learn and practice that specific skill. After all, if it turns out to be impossible to write to disk, there is no point starting a project that requires it. I find that these toy projects are simple in the sense that there is only one thing to worry about, so you're already doing the easiest thing available.

Well, if you can tolerate Google, they actually offer this. If I don't interact with my accounts for 3 months, it will send the email I've composed to designated recipients.


Unless, of course, that programmer has any number of mobility issues that limit their use of the keyboard, in which case something like cursorless might be the only option.

I urge you to take a look at it. Some even claim that it's more productive than the keyboard. I don't know how the VSCode voice feature works, but if it makes integration of cursorless easier or better, then I'm all for it.

You've just described my 50 years in the workforce, jumping from job to job, only just barely anything resembling an actual career.

I wrote this elsewhere:

He brought me much joy in tinkering, first with Pascal, then with Oberon.

In looking up and then reading that article, I discovered that not only has Oberon been actively maintained, but that there is a successor, A2. Now that I'm back to being a hobbyist, I look forward to more joyful tinkering courtesy of his great mind.

Edit: in the course of further investigation, I found many dead links. But I also found this A2 repository that shows activity from as recently as 2 months ago.

I have seen some that seem to be doing that kind of thing, but many others that will reject a bad username before asking for a password.

To double check, I just now tried putting a known bad email address into the username field for amazon.ca and was not then asked for a password, but told that no account could be found.

My possibly flawed understanding of login security is that a failed login should reveal nothing about why the login failed in order to prevent information leakage that can be exploited.
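
A minimal sketch of the principle (a Python toy with a hypothetical user store, not production code; a real system needs hashing, rate limiting, and so on):

```python
import hmac

# A toy in-memory user store; a real system would store salted password
# hashes in a database.
USERS = {"alice": "correct horse battery staple"}

GENERIC_ERROR = "Invalid username or password."

def login(username: str, password: str) -> str:
    stored = USERS.get(username, "")  # empty dummy value for unknown users
    # compare_digest is a constant-time comparison; checking the dummy
    # value for unknown users keeps the timing similar either way.
    if username in USERS and hmac.compare_digest(stored, password):
        return "Welcome!"
    # Same message for a bad username and a bad password: nothing leaks.
    return GENERIC_ERROR

print(login("alice", "wrong password"))  # Invalid username or password.
print(login("mallory", "anything"))      # Invalid username or password.
```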


I think it is, but NSFW has quite a bit of metaphorical use, too. I've seen particularly beautiful examples of craftsmanship labeled jokingly (?) as NSFW to highlight the difference between merely masterful work and artistry. That's one of the reasons I manage by subscription rather than by filter.

Even the word "porn" doesn't really work. There are various Porn groups on Reddit, like EarthPorn, which was dedicated to amazing examples of completely natural landscapes.

In the spirit of "-10x is dragging everyone else down" I offer my take on +10x:

It's not about personal productivity. It's about the collective productivity that comes from developing and implementing processes that take advantage of all levels of skill, from neophyte to master, in ways that foster the growth of others, both in skill and in their ability to mentor, guide, and foster the growth of others. The ultimate goal is the "creation" of more masters and "multipliers" while making room for those whose aptitudes, desires, and ambitions differ from your own.

That only makes sense. We are having a conversation, not creating literature.

I dealt with a similar situation by simply purchasing the code from my employer. I guess that technically it was a form of licensing, because we both had the right to use, modify, and resell as we saw fit as long as there were no infringements on branding or trademarks.

They may not offer favourable terms, but it might be worth asking.

I convinced my managers to move away from waterfall to a more iterative process using a financial analogy. Pretty much everyone understands the concept of compound interest as it applies to both debt and savings.

I framed the release of small but functional components as the equivalent of small, regular deposits to a retirement account, where benefits start to accrue immediately and then build upon each other with every "deposit". I framed the holding off of a major project until completion as the accumulation of debt with no payment plan. I also pointed out that, like a sound investment strategy, the "portfolio" of features might require adjustment over time in order to meet objectives under changing circumstances, adding substantial risk to any monolithic project.
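
With toy numbers (entirely invented), the analogy can even be put in code:

```python
# Entirely invented numbers: each period, a shipped unit of value "earns"
# 5% on top of what it has already returned, like a deposit compounding.
rate = 0.05
periods = 10

# Iterative: ship one unit per period; each starts compounding on arrival.
iterative = sum((1 + rate) ** (periods - t) for t in range(1, periods + 1))

# Waterfall: all ten units arrive at the very end; nothing has compounded.
waterfall = 10.0

print(f"iterative: {iterative:.2f} vs waterfall: {waterfall:.2f}")
# iterative: 12.58 vs waterfall: 10.00
```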

  1. I'm a programmer, so I must know how to get X done in Y software.

  2. I don't use or so I'm some kind of Luddite and can't possibly know anything useful about computers.

One thing that fascinates me about #1 is that the absolute raw dependency people have on Google doesn't seem to ever lead to searching for a tutorial.


He brought me much joy in tinkering, first with Pascal, then with Oberon.

In looking up and then reading that article, I discovered that not only has Oberon been actively maintained, but that there is a successor, A2. Now that I'm back to being a hobbyist, I look forward to more joyful tinkering courtesy of his great mind.

Edit: in the course of further investigation, I found many dead links. But I also found this A2 repository that shows activity from as recently as 2 months ago.

But typically when a field becomes more affordable, it goes up in demand, not down, because the target audience that can afford the service grows exponentially.

I've always been very up front with the fact that I could not have made a career out of programming without tools like Delphi and Visual Basic. I'm simply not productive enough to have to also transcribe my mental images into text to get useful and productive UIs.

All of my employers and the vast majority of my clients were small businesses with fewer than 150 employees and most had fewer than a dozen employees. Not a one of them could afford a programmer who had to type everything out.

If that's what happens with AI tooling, then I'm all for it. There are still far too many small businesses, village administrators, and the like being left using general purpose office "productivity" software instead of something tailored to their actual needs.

Is that not what we are already?

We are skilled in setting out instructions for another piece of software to follow in producing the appropriate sequence of 1s and 0s.

Software development will always be about discovering requirements, reconciling inconsistencies and contradictions, and describing them in a way that comes out with something useable.

There will always be people more or less skilled at that. I'm somewhere near the bottom of mere competence. Despite my best efforts, I'm getting further from mastery, not closer, and I really don't see that changing just because of ChatGPT. In fact, I've looked at the prompts people are using to get useful code and feel even further behind than ever!

This comment started as just a little joke, but now I'm not so sure.


It's been so long since I first read that that I forgot about this section:

Operation activation

Another example that illustrates our strategy is the activation of operations. Programs are not executed in Oberon; instead, individual procedures are exported from modules. If a certain module M exports a procedure P, then P can be called (activated) by merely pointing at the string M.P appearing in any text visible on the display, that is, by moving the cursor to M.P and clicking a mouse button. Such straightforward command activation opens the following possibilities:

  1. Frequently used commands are listed in short pieces of text. These are called tool-texts and resemble customized menus, although no special menu software is required. They are typically displayed in small viewers (windows).
  2. By extending the system with a simple graphics editor that provides captions based on Oberon texts, commands can be highlighted and otherwise decorated with boxes and shadings. This results in pop-up and/or pull-down menus, buttons, and icons that are “free” because the basic command activation mechanism is reused.
  3. A message received by e-mail can contain commands as well as text. Commands are executed by the recipient’s clicking into the message (without copying into a special command window). We use this feature, for example, when announcing new or updated module releases. The message typically contains receive commands followed by lists of module names to be downloaded from the network. The entire process requires only a few mouse clicks.
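
Just for fun, here's my own rough approximation of that M.P dispatch in Python; purely illustrative, not how Oberon actually does it:

```python
import importlib

def activate(command: str):
    # An Oberon-style command is just "Module.Procedure"; clicking the
    # string amounts to looking up the procedure and calling it.
    module_name, proc_name = command.rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, proc_name)()

# As if clicking the text "platform.python_version" in a document:
print(activate("platform.python_version"))  # e.g. 3.12.1
```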

Anyone remember the Melissa worm? Or perhaps been negatively affected by clicking a link in an email?

Every convenience comes at a cost. I wonder if he ever revisited that concept with an eye to how similar capabilities became the bane of our existence.

I call that the "nerd equivalency problem". I think it's the source of much (most? all?) of the problems with software that comes out of organizations that are not programming shops by nature.

"We're not moving fast enough (or, "I have this great idea!"), hire another nerd!"

The problem also exists within individual programmers ("sure, I can do that UX/UI thingy, just let me finish building this ray-tracing thingy"), but that's just an ordinary cognitive weakness that affects us all (thinking that being expert in one field makes one expert in all). It's the job of proper leadership to resist that, not act as though it's true.

Sweet! That means I can encrypt my code to keep prying eyes away.

You had me at "BASIC"! I'm going to check it out.

I think that BASIC has historically been my most productive language. My favourite implementation was something called "Z-Basic", a compiled BASIC with device-independent graphics that could run on and target Apple//, Mac, and PC.


All those assembly language instructions are just mnemonics for the actual opcodes. IIRC, on the 6502 processor family, JSR (Jump to SubRoutine) was hex 20, decimal 32. So going deeper really just means giving up the various amenities provided by assembler software and writing to memory directly. For example:

I started programming using a VIC-20. It came with BASIC, but you could have larger programs if you used assembly. I couldn't afford the assembler cartridge, so I POKED the decimal values of everything directly to memory. I ended up memorizing some of the more common opcodes. (I don't know why I was working in decimal instead of hex. Maybe the text representation was, on average, smaller because there was no need of a hex symbol. Whatever, it doesn't matter...)

VIC-BASIC had direct memory access via PEEK (retrieve value) and POKE (set value). It also had READ and DATA statements. READ retrieved values from the comma-delimited list of values following the DATA statement (usually just a big blob of values as the last line of your program).

I would write my program as a long comma-delimited list of decimal values in a DATA statement, READ and POKE those values in a loop, then execute the resulting program. For small programs, I just saved everything as that BASIC program. For larger programs, I wrote those decimal values to tape, then read them into memory. That let me do a kind of modular programming by loading common functions from tape instead of retyping them.
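
For anyone who never saw it, this is roughly the shape of that loader, simulated in Python (the real thing was, of course, BASIC writing to real memory):

```python
# Simulating the trick: the DATA statement becomes a list of decimal byte
# values, memory becomes a bytearray, and POKE becomes an index assignment.
# The bytes are a real 6502 fragment:
# LDA #$00 / JSR $FFD2 / RTS  ->  169, 0, 32, 210, 255, 96
DATA = [169, 0, 32, 210, 255, 96]

memory = bytearray(64 * 1024)  # the 6502's 64 KB address space
START = 0x1000                 # hypothetical load address

# The READ/POKE loop: copy each decimal value to successive addresses.
for offset, value in enumerate(DATA):
    memory[START + offset] = value

# On the VIC-20 you'd now SYS to START; here we can only admire the bytes.
print(memory[START:START + len(DATA)].hex(" "))  # a9 00 20 d2 ff 60
```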

I was in the process of writing my own assembler so that I could use the mnemonics directly when I got my Apple //c. More memory and the availability of quite a few high level languages derailed me and I haven't touched assembly since.

Or maybe just a powder balance. When I was a kid in the 1960s, Dad did a lot of reloading his own ammunition. We kids had fun doing things like weighing our names (weigh a small piece of paper to get the tare, write your name on it, weigh again to get the loaded weight, subtract) and other minuscule things. As I recall, it was accurate to less than a grain (0.065 gram).

One of the things we did was weigh different shapes of paper to calculate area. Start with a sample of a unit area. Cut out a funky shape and weigh it, then do the math.
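
The arithmetic, worked with invented numbers:

```python
# Weigh a reference piece of the same paper with a known area, weigh the
# cutout, and scale. The numbers are invented for illustration.
reference_area_cm2 = 100.0  # a 10 cm x 10 cm square
reference_weight_g = 0.80
shape_weight_g = 0.35

shape_area_cm2 = shape_weight_g / reference_weight_g * reference_area_cm2
print(f"{shape_area_cm2:.1f} cm^2")  # 43.8 cm^2
```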

Bonus: it sounds like a scream of terror.

That is something I just don't get. I'm a hobbyist turned pro turned hobbyist. The only people I ever offered my services to were either after one of my very narrow specialties, where I was actually an expert, or literally unable to afford a "real" programmer.

I never found proper security to have any impact on my productivity. Even going back to my peak years in the first decade of this century, there was so much easily accessible information, so many good tutorials, and so many good products that even my prototypes incorporated the basics:

  • Encrypt the data at rest
  • Encrypt the data in transit
  • No shared accounts at any level of access
  • Full logging of access and activity
  • Before rollout, backup and recovery procedures had to be demonstrated effective and fully documented

Edited to add:

It's like safety in the workplace. If it's always an add-on, it will always be of limited effectiveness and reduce productivity. If it's built in to the process from the ground up, it's extremely effective and those doing things unsafely will be the productivity drain.
