akash_rawal

@akash_rawal@lemmy.world
5 Posts – 45 Comments
Joined 13 months ago

I actually like this. This would allow reuse of all the infrastructure we have around XML. No more SQL injection and dealing with query parameters? Sign me up!


You have rust.

You get a horse and arrive at the castle within seconds but the horse is too old and doesn't work with the castle.

You remove the horse, destructure the castle and rescue the princess within seconds, but now you have no horse.

While you're finding a compatible horse and thinking whether you should write your own horse, Bowser recaptures the princess and moves her to another castle.

Replacing "Programmers:" with "Program:" is more accurate.

::: spoiler spoiler Tower of Hanoi is actually easy to write a program for. Executing it, on the other hand... :::


How I lost a Postgres database:

  1. Installed Postgres container without configuring a volume
  2. Made a mental note that I need to configure a volume
  3. After a few days of usage, restarted the container to configure the volume
  4. ...
  5. Acceptance
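
For the record, a named volume on the data directory is all it would have taken; a minimal sketch, assuming the official postgres image:

# "pgdata" is a named volume, so the data survives container removal
docker run -d --name postgres \
  -e POSTGRES_PASSWORD=changeme \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16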

That's too much effort. Just advertise the CVE fix and let a paying customer put in the effort.

Not caring about privacy is one thing. There is also a network effect: caring about privacy leads to poorer contact with family, friends, and the people they care about. Privacy has become correlated with disadvantage.

The sad thing about it is that none of it is natural; the bigwigs have rigged it this way. Sometimes I feel like the only winning move is to choose your peers, and if you cannot choose your peers, you cannot win this game.

Together we can make this happen!

Gosh, if I ever get into the business of writing software for spacecraft on long-duration missions, I'll have to test for such cases.


I don't like the mess some software makes when it installs on my system.

I gave up bothering about this a decade ago and I just store my files elsewhere while software treats the home directory as 'application data'.


I plan to have 2 switches.

Of course, if a switch fails, client devices connected to the switch would drop out, but any computer connected to both switches should have link redundancy.
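
For the dual-homed machines, an active-backup bond is probably the simplest way to get that link redundancy; a rough sketch with iproute2, where the interface names and address are just examples:

# One NIC cabled to each switch; active-backup keeps one link active at a time
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.0.10/24 dev bond0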

Nothing will ever top "Galaxy Note 7". Super fun in planes, especially if they're flying.

I didn't know the answer either, but usually you can compose a solution from solutions to smaller problems.

solution(0): There are no disks. Nothing to do.

solution(n): Let's see if I can use solution(n-1) here. I'll use solution(n-1) to move all but the last disk A->B, just need to rename the pins. Then move the largest disk A->C. Then use solution(n-1) to move the disks B->C, again by renaming the pins. There we go, we have a stack-based solution running in exponential time.

It's one of the easiest problems in algorithm design, but running the solution by hand would give you PTSD.
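
Roughly, the recursion above in bash form (pin names are arbitrary):

# Move $1 disks from pin $2 to pin $3, using pin $4 as the spare
hanoi() {
    local n=$1 from=$2 to=$3 via=$4
    if [ "$n" -eq 0 ]; then return; fi      # solution(0): nothing to do
    hanoi $((n - 1)) "$from" "$via" "$to"   # all but the largest disk: from -> via
    echo "Move disk $n: $from -> $to"       # largest disk: from -> to
    hanoi $((n - 1)) "$via" "$to" "$from"   # the rest: via -> to
}
hanoi 3 A C B   # 2^3 - 1 = 7 moves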

There would be some quality-of-life improvements, like being able to replace a switch without pulling down the entire cluster, but it is mostly for a challenge.


We can say the default is AND and add an Or node for OR. Similar to SoP notation, where you only write the +.

Well, we could allow root login via passwordless telnet so that they can be extra sure that we aren't hiding anything.

Based on the title, I misunderstood it as "oh shit, I messed up real bad while booted into this ISO". (I have that one too.)

Until recently I have been switching between Ubuntu ISO and custom Arch ISO. Now I have a regular Arch install on a fast USB drive for repairs (not ISO).

The features necessary for most btrfs use cases are all stable, plus btrfs is readily available in the Linux kernel, whereas for zfs you need an additional kernel module. The availability advantage of btrfs is a big plus in case of a disaster, i.e. no additional work is required to recover your files.

(All of the above only applies if your primary OS is Linux; if you use Solaris then zfs might be better.)

The TPM stores the encryption key sealed against secure boot. That way, if an attacker disables or alters secure boot, the TPM won't unseal the key. I use clevis to decrypt the drive.
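
For reference, the binding itself is a one-liner; a sketch assuming a LUKS2 partition at /dev/nvme0n1p2 and PCR 7 (which tracks the secure boot state):

# Seal a new LUKS keyslot in the TPM, bound to the secure boot PCR
clevis luks bind -d /dev/nvme0n1p2 tpm2 '{"pcr_bank":"sha256","pcr_ids":"7"}'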

Technically, containers always run in Linux. (Even on Windows/macOS; on those platforms Docker runs a lightweight Linux VM that then runs your containers.)

And I wasn't even using Docker.

I use an rsync + btrfs snapshot solution.

  1. Use rsync to incrementally collect all data into a btrfs subvolume
  2. Deduplicate using duperemove
  3. Create a read-only snapshot of the subvolume

I don't have a backup server, just an external drive that I only connect during backup.

Deduplication is mediocre; I am still looking for a snapshot-aware duperemove replacement.
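
For reference, the three steps above map roughly to commands like these (paths are illustrative; the external drive is assumed to be btrfs mounted at /mnt/backup, with a subvolume created beforehand via btrfs subvolume create /mnt/backup/current):

# 1. Incremental sync into the writable subvolume
rsync -aHAX --delete /home/ /mnt/backup/current/
# 2. Deduplicate the subvolume
duperemove -dr --hashfile=/mnt/backup/hashes.db /mnt/backup/current
# 3. Read-only snapshot named after the date
btrfs subvolume snapshot -r /mnt/backup/current /mnt/backup/$(date +%Y-%m-%d)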


I just keep my stuff far away from $HOME and don't bother about the junk. Not even in a subdirectory under $HOME.

Same goes for 'My Documents' on Windows.

Just did some basic testing of broadcast addresses using socat; broadcast is not working at all with /32 addresses. With /24 addresses, broadcast only reaches nodes that share a subnet. Nodes that don't share the subnet aren't reachable by broadcast even when they're reachable via unicast.

Edit 1: Did more testing; it seems like broadcast traffic ignores routing tables.

On 192.168.0.2, I am running socat -u udp-recv:8000,reuseaddr - to print UDP messages.

Case 1: add 192.168.0.1/24

# ip addr add 192.168.0.1/24 dev eth0
# # Testing unicast
# socat - udp-sendto:192.168.0.2:8000 <<< "Message"
# # Worked
# socat - udp-sendto:192.168.0.255:8000,broadcast <<< "Message"
# # Worked

Case 2: Same as above but delete 192.168.0.0/24 route

# ip addr add 192.168.0.1/24 dev eth0
# ip route del 192.168.0.0/24 dev eth0
# # Testing unicast
# socat - udp-sendto:192.168.0.2:8000 <<< "Message"
2024/02/13 22:00:23 socat[90844] E sendto(5, 0x5d3cdaa2b000, 8, 0, AF=2 192.168.0.2:8000, 16): Network is unreachable
# # Testing broadcast
# socat - udp-sendto:192.168.0.255:8000,broadcast <<< "Message"
# # Worked

I run a crude automation on top of an OpenSSL CA. It checks for certain labels attached to Kubernetes services, and based on those it creates Kubernetes secrets containing the generated certificates.
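
The manual equivalent of one iteration looks roughly like this (the names are made up; the automation just loops over the labelled services and does the same thing):

# Issue a certificate for one service from the local CA
openssl req -new -newkey rsa:2048 -nodes \
  -keyout svc.key -out svc.csr -subj "/CN=myservice.default.svc"
openssl x509 -req -in svc.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out svc.crt -days 365
# Publish it as a TLS secret next to the service
kubectl create secret tls myservice-tls --cert=svc.crt --key=svc.key -n default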


I just uploaded it to GitHub: https://github.com/akashrawal/nsd4k

I only made it for myself, so expect very rough edges in there.

https://imgur.com/8p7oPbp

::: spoiler spoiler It's fine guys, I ran it in a VM with no networking. :::

That error doesn't occur if you only ever copy the code as-is *taps head*

It is for a challenge. The goal is to build a cloud with workloads decoupled from servers, and servers decoupled from the users who deploy the workloads, with redundant network and storage and no single choke point for network traffic, and I am trying to achieve this on a small budget.

I keep Arch Linux installed on a USB drive, no ISO for me.

I was thinking along the lines of

Plenty of libraries can build the XML using structs/classes. e.g. with serde:

//Data type for row
#[derive(serde::Serialize)]
pub struct Foo {
    pub status: String,
    pub name: String,
}

//Example row
let ent = Foo {
    status: "paid".into(),
    name: "bob".into(),
};

//Example execution (inside an async context; InsertStmt is a hypothetical wrapper type)
sqlx::query(&serde_xml_rs::to_string(&InsertStmt {
    table: "foo".into(),
    value: &ent,
})?)
.execute(&mut conn)
.await?;

Or with jackson-dataformat-xml:

//Data type for row
public class Foo {
    public String status;
    public String name;
}

//Example row
Foo ent = new Foo();
ent.status = "paid";
ent.name = "bob";

//Example execution (InsertStmt is a hypothetical wrapper type)
XmlMapper xmlMapper = new XmlMapper();
String xml = xmlMapper.writeValueAsString(new InsertStmt("foo", ent));
try (Statement stmt = conn.createStatement()) {
    stmt.executeUpdate(xml);
}

I don't do JS (yet) but maybe JSX could also do similar things with XML queries.

No more matching $1, $2, ... (or ? for MySQL) with individual columns; I could dump entire structs/objects into a query and it would just work.

This was me on a bank's site till I clued in that I needed to shorten my password.

Here is a trick that has been tried and tested over the years: install another distro, and use that to install Arch. This way, you can rely on an already-working Linux distro until your Arch install works the way you want.

Yes, the entire network is supposed to be redundant and load-balanced, except for some clients that can only connect to one switch (but if a switch fails it should be trivial to just move them to another switch.)

I am choosing Dell OptiPlex boxes because they are the smallest x86 nodes I can get my hands on. There is no PCIe slot in them other than the M.2 slot, which will be used for the SSD.

That advertisement would be interpreted as Node C's advertisement.

The plan is to treat public keys as the node's identity, with a trust mechanism similar to OpenPGP (e.g. include any node whose key is signed by a master key as a cluster member).

None of the encryption part is done, and it is not a priority right now. I first need to implement transitive node detection, actually forwarding packets between nodes, and some way to store and manage routes, and then the trust and encryption mechanisms, before I'd dare to test this stuff on a real network.

> If redundant everything is important then you need to change your planning toward proper rack servers and switches

I ain't got that budget man.


The Level1 video shows Thunderbolt networking though. It is an interesting concept, but it requires nodes with at least 2 Thunderbolt ports in order to have more than 2 nodes.

It might be a failing fan. I have an Intel NUC whose fan started sounding like an air-raid siren, so I took the fan out, drilled a hole into its bearing, and added coconut oil. It is working fine to this day, but buying a new fan is probably better.

If one service needs to connect to another service, then I have to add a shared network between them. In that case, the services essentially share a common DNS namespace. DNS resolution would routinely leak from one service to another and cause outages; e.g. if I connect the Gitlab container and the Redmine container to the OpenLDAP container, then sometimes Redmine's nginx container would access the Gitlab container instead of the Redmine container, and the Gitlab container would access Redmine's DB instead of its own DB.

I maintained some workarounds; e.g. starting Gitlab after Redmine worked fine, but starting them the other way around would trigger this issue. Switching to Kubernetes and replacing the cross-service connections with network policies solved it for me.


I was writing my own compose files, but see my response to a sibling comment for the issue I had.

> don't create one network with Gitlab, Redmine and OpenLDAP - do two, one with Gitlab and OpenLDAP, and one with Redmine and OpenLDAP.

This was the setup I had, but I am already on Kubernetes now with no intention of switching back.