Sarah Asakura

@Sarah Asakura@lemmy.blahaj.zone
0 Posts – 7 Comments
Joined 1 year ago

Hello! I'm a lot of things, but most fundamentally I'm a person who is terrible at writing bios

It is a curious thing - I think for the foreseeable future, we'd likely still track age via Earth years for the sake of avoiding this kind of thing. I wonder, if/when we do colonize, how long it would take for the Martians to actually switch. How much would our cultures have drifted by that point?

And a related shower thought - until relatively recently (the late 1960s, when the second was redefined in terms of atomic clocks), every definition we used to track time was deeply localized to our planet, compared to other things in math and science, where many of those concepts would still work the same elsewhere


Nah, there have been some blogs recently from engineers who are bucking the microservice trend - notably, Amazon Prime Video moved back to more of a monolithic deployment and saw performance improvements and infrastructure cost reductions

https://www.linkedin.com/pulse/shift-back-monolithic-architecture-why-some-big-making-boudy-de-geer

I wouldn't say anything is wrong with them; the pros and cons have always been there, but the cons are starting to be more widely recognized by decision makers


Expanding a bit on what others have said, for anybody who is further interested (simplified; this whole discussion could fill pages and pages of explanation)

The code we write (source code) and the code that makes the machine do its thing (executable code) are usually very different, and there are other programs (some are compilers, others are interpreters, and I'm sure there are others still) that help translate between them. Hopefully my examples and walkthrough below illustrate what others have meant by their answers and give some context on how we got to where we are, historically

At the bare metal, with electricity flowing through a processor, you're generally dealing with just sequences of 0s and 1s - usually called machine code. This sequence of "on" and "off" is the only thing the hardware understands, but it's really hard for humans to work with; still, it's all we had at first. A program saved as machine code is typically called a binary (for formatting's sake, I added spaces between the bytes/groups of 8 bits/binary digits)

00111110 00000001 00000110 00000001 10000000 00100110 00000000 00101110 00000000 01110111

A while later, we started to develop a way of writing things as small keywords with numerical values, and wrote a program that (simplified) would replace the keywords with specific sequences of 0s and 1s. This is assembly code, and the program that does the replacements is called an assembler. Assemblers are pretty straightforward, and assembly is a close 1:1 translation of machine code, meaning you can convert between the two (the listing below is the same program as the binary above)

LD A, 0x01
LD B, 0x01
ADD A,B
LD H, 0x00
LD L, 0x00
LD (HL), A

These forms of code are usually your executable codes. All the instructions to get the hardware to do its thing are there, but it takes expertise to pull out the higher level meanings

This kind of writing still gets tedious, and there are a lot of common things you'd do in assembly that you might want shortcuts for. Some features for organization got added to assembly, like being able to comment code, add labels, etc, but the next big step forward was creating higher level languages that look more like how we write math concepts. These languages typically get compiled into machine code, by a program called a compiler, before the code can run. Compilers working with high level languages can detect a lot of things and do a lot of tricks to give you efficient machine code; the translation is no longer 1:1

This kind of representation is what's generally meant by "source code", and it carries a lot of semantic information that helps with understandability

int main() {
  int result = 1 + 1; // the same 1+1 computation as the machine code above
}

There are some very popular high level languages now that don't get compiled into machine code. Instead, an interpreter reads the high level source and executes it line by line. These languages don't have a compilation step and usually don't produce a machine code file to run; you just run the interpreter, pointing it at the source directly. Proprietary code that's distributed this way usually goes through a process called obfuscation/minification. This takes something that looks like:

function postCommentAction(commentText, apiConnection) {
  apiConnection.connect()
  apiConnection.postComment(commentText, userInfo)
  apiConnection.close()
}

And turns it into:

function f(a,b){b.a();b.b(a,Z);b.c();}

It condenses things immensely, which helps performance/load times, and it also makes it much less clear what the code is doing. Just like with assembly, all the info necessary to make it work is there, but the semantics are missing

So, to conclude - yes, you can inspect the raw instructions of any program and see what it's doing, but you're very likely to be met with machine code (which you can turn into assembly) or minified scripts, rather than the kind of source code that was used to create the program

My weekend was pretty great! Today's the end of a just-over-a-week-long vacation for me

Last weekend, I visited a friend and got to meet some new people. They're the first group who only knows me by my chosen name, which I've only been using with close friends for a few months. It's been nice knowing that

Last night, a friend (one of the close ones who's in on my chosen name) and I went to Philly to see a pro wrestling show. The crowd was great, and there was a lot of love and acceptance and support

I'm looking forward to getting back to work and back into my normal routines, with thankfully a shorter work week ahead


I've got a 20 month old at home, but thankfully my wife let me visit a friend. They were going to come with, but ended up staying home and seeing my Mom

I feel you on the anxiety; I'm very much an introvert and awkward around new people. I got lucky with my friend being extroverted. I spent a lot of the night sitting quietly and listening, but eventually they got me to open up

I started on Digg in the summer/fall of 2005 right around the launching point of Diggnation; maybe 5-10 episodes in. A friend got me introduced to that. I was there until September of 2010 and then made the move to reddit as a part of the Great Digg Migration, and now find myself here on the fediverse

I've never been a very active contributor, but still felt connected and enjoyed seeing the conversations and links that people were sharing

My primary languages are Java (for work), JavaScript (for work), and C/C++ (for hobbies). Earlier in my career, I used the debugger a lot to help figure out what was going on while my applications were running, but I really don't reach for it as a tool anymore. Now, I'll typically gravitate toward either logging things (at a debug level that I can turn on and off at runtime) or writing tests to help me organize my thoughts and expectations

I don't remember when, if ever, I made the deliberate decision to switch my methodology, but I feel like doing things via logging or tests gives me two benefits. Six months later, I can look back and be reminded that I was having trouble in that area, and how I fixed it. Those artifacts also serve as a sanity check: if I'm changing things in that area, I won't end up breaking it again (at least not in the same way)
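As a rough sketch of that logging-plus-tests habit (all names and the business rule here are hypothetical, and I'm using only java.util.logging so it stands alone):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical example of debug-level logging that can be toggled at runtime,
// paired with a lightweight check that doubles as a note to my future self.
public class OrderTotals {
    private static final Logger LOG = Logger.getLogger(OrderTotals.class.getName());

    // Made-up business rule for illustration: sum prices, apply a flat discount.
    static int totalCents(int[] priceCents, int discountCents) {
        int sum = 0;
        for (int p : priceCents) {
            sum += p;
        }
        final int subtotal = sum; // effectively-final copy for the lambda below
        // FINE messages are only built and emitted when the logger is turned up,
        // so this breadcrumb costs almost nothing in normal operation.
        LOG.fine(() -> "subtotal=" + subtotal + " discount=" + discountCents);
        return Math.max(0, subtotal - discountCents);
    }

    public static void main(String[] args) {
        // Flip to Level.FINE (plus a handler at that level) when investigating.
        LOG.setLevel(Level.INFO);
        // A small regression check documenting the expected behavior in this area.
        int total = totalCents(new int[]{199, 250}, 50);
        System.out.println(total); // 199 + 250 - 50 = 399
    }
}
```

Six months on, the log line marks where the trouble was, and the check in main (in real code, a proper unit test) keeps the fix from quietly regressing.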

With that said, I will reach for the debugger as a prototyping tool. IntelliJ IDEA (and probably other Java debuggers) lets you execute statements on the fly. I'll run my app up to the point where I know what I want to do, but don't know exactly how to do it, then pause it and try different statements to see what comes back; that has sped up my development quite a bit. Logging and testing don't really apply to that kind of exploration, and pausing to run arbitrary statements beats the other options in how quickly and minimally the exploration can be done