chaospatterns

@chaospatterns@lemmy.world
2 Post – 42 Comments
Joined 1 year ago

The alternative is to let certain countries de facto claim a region because others are too afraid to call them on their BS


For those who aren't aware: this is about cell phones roaming onto other networks. The traffic is now encrypted back to the home provider, which means law enforcement struggles to tap it (legally or illegally).

PET stands for privacy-enhancing technologies.

Fears raised over ‘Chinese spy cranes’ in US ports

There are concerns that the machines are effectively Trojan Horses for Beijing and could be used to sabotage sensitive logistics

Unexplained communications equipment has been found in Chinese-made cranes in US ports that could be used for spying and potentially “devastate” the American economy, according to a new congressional investigation.

The finding, first reported by The Wall Street Journal (WSJ), will stoke American concerns that the cranes are effectively Trojan Horses for Beijing to gain access to, or even sabotage, sensitive logistics.

The probe by the House Committee on Homeland Security and the House select committee on China found over a dozen pre-installed cellular modems that can be remotely accessed in just one port.

Many of the devices did not seem to have a clear function or were not documented in any contract between US ports and crane maker ZPMC, a Chinese state-owned company that accounts for nearly 80 per cent of ship-to-shore cranes in use in America, according to the WSJ.

The modems were found “on more than one occasion” on the ZPMC cranes, a congressional aide said.

“Our committees’ investigation found vulnerabilities in cranes at US ports that could allow the CCP [Chinese Communist Party] to not only undercut trade competitors through espionage, but disrupt supply chains and the movement of cargo, devastating our nation’s economy,” Mark Green, the Republican chair of the House Homeland Security Committee, told CNN.

The Chinese government is “looking for every opportunity to collect valuable intelligence and position themselves to exploit vulnerabilities by systematically burrowing into America’s critical infrastructure,” he told the WSJ, adding that the US had overlooked the threat for too long.

The Telegraph has contacted ZPMC for comment.

‘The new Huawei’

A spokesman for the Chinese embassy in Washington DC said claims that Chinese-made cranes pose a security risk are “entirely paranoia.”

The US investigation began last year amid Pentagon fears that sophisticated sensors on large ship-to-shore cranes could register and track containers, offering valuable information to Beijing about the movement of cargo supporting US military operations around the world.

At the time, Bill Evanina, a former top US counterintelligence official, said: “Cranes can be the new Huawei.”

“It’s the perfect combination of legitimate business that can also masquerade as clandestine intelligence collection,” he told the WSJ.

In recent years, a handful of Chinese crane companies have grown into major players in the global automated ports industry, working with Microsoft and other companies to connect equipment and analyse data in real-time.


It's true that Mozilla does collect telemetry and that Mozilla Corp is for-profit; however, Mozilla Corp is owned by the Mozilla Foundation. That ownership structure is either a way to get around limitations on non-profits, or it's an opportunity for the Foundation to directly influence the Corp to be better.

However, I'll still use Firefox/Thunderbird, because usage stats such as the number of accounts or filters are in no way comparable to my username and password. One is basic metadata and stats; the other is a massive risk. You can opt out of the telemetry; the only way to opt out of sharing your password is to not use the new Outlook.

I take a more pragmatic approach to privacy, based on trust. I understand the value of telemetry, but my stance changes depending on the company. I have less trust in Big Tech; Mozilla, while it has issues, is on average far better for privacy.

As a developer, I understand the value of telemetry and the risks that come with collecting any data. I pick Firefox because it challenges the homogeneity of Google's influence, and it looks like I'm going to pick Thunderbird because I haven't seen a better option.


Also, the law requires that publicly traded companies be greedy

The law doesn't actually state you need to screw over your customers and maximize profit. It says that executives have a fiduciary duty, which means they must act in the best interest of the shareholders, not themselves.

That does not mean they have to squeeze out every single dollar of profit. Executives have some leeway here and can very easily argue that napkins lead to happier customers and longer-term retention, which means long-term profits.

It's purely short-term, Wall Street-driven behavior, compounded by executive pay being based in stock, so they're incentivized to drive up the price over the next quarter so they can cash out.

.NET Core is the future, but Mono is still important for running legacy .NET Framework applications, like ones that use WinForms or WPF. That's pretty much it. Anything new should go straight to .NET Core.


Attestation depends on a few things:

  1. The website has to choose to trust a given attestation provider. If Open Source Browser Attestation Provider X is known for freely handing out attestations, then websites will just ignore it.
  2. The browser's self-attestation. This is the tricky part to implement. I haven't looked at the WEI spec to see how this works, but ultimately it depends on code running on your machine identifying when it's been modified. In theory, you can modify the browser however you want, but it's likely that this code will be thoroughly obfuscated and regularly changed to make it hard to reverse engineer. In addition, there are CPU-level systems like Intel SGX that provide secure enclaves to run code, and a remote entity can verify that the code that ran in SGX was the same code the remote entity intended to run.

If you're on iOS or Android, there are already strong OS-level protections that a browser attestation can plug into (like SafetyNet).
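A highly simplified sketch of the trust relationship in point 1: a provider signs a claim about the browser, and the website checks it against a provider it chose to trust. Real attestation schemes use asymmetric keys, certificate chains, and hardware enclaves; the shared-secret HMAC, key, and browser IDs here are purely illustrative.

```python
import hmac
import hashlib

# Hypothetical shared secret between the attestation provider and the
# website; real schemes use asymmetric keys and certificate chains.
PROVIDER_KEY = b"example-provider-key"

def issue_attestation(browser_id: str) -> bytes:
    """Provider signs a claim that this browser build is unmodified."""
    return hmac.new(PROVIDER_KEY, browser_id.encode(), hashlib.sha256).digest()

def verify_attestation(browser_id: str, token: bytes) -> bool:
    """Website checks the token against a provider it chose to trust.
    A provider known to sign anything would simply be dropped from the
    website's trust list, making its tokens worthless."""
    expected = hmac.new(PROVIDER_KEY, browser_id.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

token = issue_attestation("firefox-123.0-unmodified")
print(verify_attestation("firefox-123.0-unmodified", token))  # True
print(verify_attestation("firefox-123.0-patched", token))     # False
```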

I think this is a problem with applications that have a privacy-focused user base. It becomes very black and white, where any type of information being sent anywhere is bad. I respect that some people hold that opinion, and more power to them, but being pragmatic about this is important. I personally disabled this flag, and I recognize how this edges into a risky area, but I also recognize that the Mozilla CTO is somewhat correct: if we have the choice between a browser that blocks everything and one that is privacy-preserving (where users can still opt for the former), businesses are more likely to adopt the privacy-preserving standards, and that benefits the vast majority of users.

Privacy is a scale. I'm all on board with Firefox; I block tons of trackers and ads, and I'm even somebody who uses NoScript and suffers the ramifications for ideological reasons, but I also enable telemetry in Firefox because I trust that usage metrics will benefit the product.

Totally. I used to contribute to Google Maps quite a bit and got fairly high up in the Local Guides levels, but now I find myself contributing a lot to OSM. I feel a lot better about contributing to an open platform versus letting a company lock up my changes.

I just haven't made the switch to use it as a mobile client yet

It's generally not a hardware problem; it's a resourcing problem. Companies like GitHub have complex software and architecture. IPv6 requires them to get a pool of IP addresses, come up with an IP address management strategy, and make sure all hosts have IPv6 addresses, which means provisioning systems and DNS management tooling now have to plumb IPv6 addresses through too.

Then the software stack has to support it. Maybe their fraud detection or auditing systems have to now support IPv6 which means changes to API schemas.

None of this is a good reason why they shouldn't do it, but as a software engineer I've had to make similar decisions at my job on things that look simple but actually require changes across systems.

What is your threat model or goal? It could hide the device you use to connect to the instance, however a lot of actions you do on Lemmy, including all upvotes, are public to other instances.

First the basics: "connection refused" means that nothing is listening at http://192.168.0.2:8020/.

  1. Is 192.168.0.2 the IP address of the Django container? If it's the host's IP, does docker ps show that the port is bound to the host? e.g. 0.0.0.0:8082->8082/tcp

Confirmed upstream block container is running and on the right exposed port

What steps did you do to confirm that this is running?
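One quick way to check that point from another machine is a small TCP probe; this is just a sketch, and the IP/port below are the ones from the error message above (substitute your own container or host address):

```python
import socket

def is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port.

    A "connection refused" error from the browser or curl generally means
    this returns False: no process is bound to that address and port.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, timed out, unreachable
        return False

# Address taken from the error in the thread above; replace with yours.
print(is_listening("192.168.0.2", 8020))
```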


The problem with Grocy is that going too fine-grained means you're unlikely to keep it up to date or accurate. I would not try to track your usage in ml; just track it at the bottle level.

However, you can still track the price per ml, because Grocy lets you set units independently. Just define a mapping between bottle and ml.
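The arithmetic behind that mapping is simple; with toy numbers (not from Grocy itself), a 750 ml bottle bought for $21.00 works out like this:

```python
# Toy numbers: a 750 ml bottle bought for $21.00. Grocy's quantity-unit
# (QU) conversion plays the same role as ml_per_bottle here.
ml_per_bottle = 750
price_per_bottle = 21.00

price_per_ml = price_per_bottle / ml_per_bottle
print(f"{price_per_ml:.3f} $/ml")  # 0.028 $/ml
```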


That would be illegal. I worked on the software deployment of these devices in a store; if we increased the price, we'd automatically give the customer the lowest price from the last several hours.

The other problem was that they were extremely low-powered and low-bandwidth, and it would have killed the battery to update more than a few times a day.

Doubtful. Ticketmaster is there to take the bad PR, but the system was designed to collect as many fees as possible and funnel part of them to the artist. Yes, TM has deals with Live Nation that basically force big artists to use them because they control the big stadiums, but Taylor Swift is a massive artist; she has tons of lawyers and can negotiate fees.

As much as I love Taylor Swift, I have no doubt that she is massively benefiting from the high ticket prices.

Or just use wired connections. This is targeting WiFi cameras and doorbells.

Accidentally typo your password and you get blocked. And if you're tunneling over Tor, you've blocked 127.0.0.1, which means now nobody can log in.


Paperless does support defining a folder structure that you can use to organize documents within the paperless media volume; however, you should treat it as read-only.

OP could use this as a way to keep their desired folder structure as much as possible, but it would have to be separate from the consumption folder.

Why is telemetry useful or why is it needed to use pi-hole to block telemetry?

Telemetry is useful to know what features your customers use. While it's great in theory to have product managers who dogfood and can act on everyone's behalf, the reality is telemetry ensures your favorite feature keeps being maintained. It helps ensure the bugs you see get triaged and root caused.

Unfortunately, telemetry has grown to mean too many things to different people. Telemetry can refer to feature usage, bug tracking, advertising, or behavior tracking.

Is there evidence that even when you disable telemetry in Firefox it still reports telemetry? That seems like a strong claim for Firefox.

Right, it's a lot better to give somebody a better alternative first if you want the public on board. Build up public transit, build up regional and high-speed rail, and leave planes for long distances that are poorly suited for trains and cars (e.g. international, cross-continental, etc.).

This is my biggest challenge with this extension: what's clickbait to one person is not to another. Several times I've come across titles that get mangled when rewritten and lose key points, or the image gets replaced with a random screen grab. There's a difference between somebody doing the YouTube face under a title like "the craziest stunt you've ever seen" and an artist photo with a title saying "a crazy stunt jump through a burning hoop". I'm okay with the latter, but DeArrow will often remove "crazy". This is just a contrived example.

One person could still say "crazy" makes it clickbait, but some adjectives are fine.

You can sign into multiple accounts on the same website in different tabs. I use this to sign into many different AWS accounts for work, which AWS doesn't natively support.

Yes, fiduciary duty to shareholders is sometimes misunderstood, but this is in scope.

Everything can be securities fraud:

https://archive.is/p2YHV

Or:

https://www.bloomberg.com/opinion/articles/2019-06-26/everything-everywhere-is-securities-fraud

NoScript lets you enable or disable WebGL per site. If you don't want to deal with the hassle of websites breaking, you can set the default to enable JS but disable WebGL, then mark specific sites as trusted with WebGL.

If you're running Docker for servers, not development, then you can make Hyper-V work. I used to do that before I got a separate Linux server, and it worked out.

Just set up a network adapter that's bridged to your Ethernet adapter, then create a VM that uses that bridged adapter. The Linux VM will appear as another computer on your LAN, and you can use Docker with host networking.

Ah yeah. Mono didn't support WPF, but Mono did support running WinForms apps natively on Linux without using Wine.

That's not because you have a wildcard; it's because you need to implement DKIM, DMARC, and SPF records to prevent others from using your domain name to send mail.

MTAs use those standards to verify whether somebody is permitted to send email for your domain. If you don't have them set, then you can get what that ISP described.
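For illustration, the three records look roughly like this in a zone file. The domain, DKIM selector, and key are placeholders; the exact policy values (`-all`, `p=reject`, the reporting address) are choices you'd tune for your own setup:

```
; SPF: only the domain's MX hosts may send mail for example.com
example.com.                       IN TXT "v=spf1 mx -all"

; DKIM: public key that receivers use to verify message signatures
selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public-key>"

; DMARC: what receivers should do when SPF/DKIM checks fail
_dmarc.example.com.                IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"
```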


There was an uncaught exception to boot gunicorn workers

That's odd that it didn't cause the Docker container to immediately exit.

What now? So now that it looks like everything is working, what is the best practice for nginx.conf? Leave it all in /etc/nginx/nginx.conf (with user as root), or re-establish the out-of-the-box nginx.conf and /etc/nginx/conf.d/default.conf?

My suggestion would be to create /etc/nginx/conf.d/mycooldjangoapp.conf. Compared to conf.d/default.conf, this is more intuitive if you start hosting multiple apps. Keep it out of nginx.conf, because apt-get or other package managers will usually patch that file with new versions, and again, it gets confusing if you have multiple apps.


If you are port forwarding, I recommend not exposing it on the default port of 25565; instead expose it on a random port. Then, assuming you have a domain name, create an SRV record that points to your IP and port. This will cut down on the drive-by scanners who scan by port, but won't totally eliminate them. If you do use the SRV record, your friends won't even notice there's a different port.
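For reference, a hypothetical zone snippet (domain, IP, and port are all made up). Minecraft clients automatically look up the _minecraft._tcp SRV record for the domain you type, so friends just enter example.com:

```
; A record pointing at your public IP
mc.example.com.               IN A   203.0.113.10

; SRV record: priority weight port target
_minecraft._tcp.example.com.  IN SRV 0 5 53122 mc.example.com.
```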

I have Wireguard and I forward DNS and my internal traffic from my phone over the VPN to my pi-hole at home. All other traffic goes directly over the Internet, not the VPN. So that means only DNS encounters higher latency.

However, because a lot of companies do DNS-based geo load balancing, even when I'm on the east coast all my traffic gets sent to the west coast, because that's where my DNS server is located. That has the biggest impact on latency.

It's tolerable on the same continent, but once I start getting into other continents then it gets a bit slow.


Sorry, what do you mean route it directly? Maybe I didn't clarify well enough.

My DNS is routed over the VPN, but Internet traffic is routed directly. The problem is that the load balancing is done based on where the DNS server is. So for, say, Google: even though the traffic egresses directly to the Internet, bypassing the VPN, it still goes to a Google DC near my home. Not all websites do this, so it's not always an issue.

There's two main ways of doing geo-based load balancing:

  1. IP anycasting - in this case, an IP address is "homed" in multiple spots and, through the magic of IP routing, traffic arrives at the nearest location. This is exactly how 1.1.1.1 and 8.8.8.8 work. It works fine for stateless packets like DNS, but it has some risks for stateful traffic like HTTP.
  2. DNS-based load balancing - a server receives a request for "google.com", looks at the IP of the DNS server and/or the EDNS Client Subnet in the DNS query packet, and returns an IP that's nearby. The problem is that with Wireguard, the query goes phone -> pi-hole (source IP is some internal IP) -> the next hop (e.g. 1.1.1.1 or 8.8.8.8), which sees the packet coming from your home/pi-hole's public IP. Thus it gets confused and thinks you're in a different location than you really are. Neither of these hops knows the true location of your phone/mobile device.

Of course, this doesn't matter for companies that only have one data center.
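A toy sketch of option 2, with entirely made-up datacenter IPs and client prefixes, showing why the resolver's location wins over the phone's:

```python
import ipaddress

# Hypothetical datacenters and the client prefixes mapped to them. A real
# DNS load balancer uses the resolver's IP (or the EDNS Client Subnet
# option) the same way: whichever prefix matches decides the answer.
DATACENTERS = {
    "east": "198.51.100.10",
    "west": "203.0.113.10",
}
PREFIX_TO_DC = {
    ipaddress.ip_network("100.64.0.0/10"): "east",  # e.g. an east-coast ISP
    ipaddress.ip_network("192.0.2.0/24"): "west",   # e.g. the pi-hole's ISP
}

def resolve(resolver_ip: str) -> str:
    """Return the 'nearest' datacenter IP based on who asked, not on where
    the end user actually is. That is exactly the problem when a phone's
    queries all arrive via a pi-hole at home."""
    addr = ipaddress.ip_address(resolver_ip)
    for net, dc in PREFIX_TO_DC.items():
        if addr in net:
            return DATACENTERS[dc]
    return DATACENTERS["east"]  # arbitrary default

# Phone on the east coast, but DNS tunneled to a west-coast pi-hole:
print(resolve("192.0.2.53"))  # prints the west IP, 203.0.113.10
```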

Amazon corporate employees get RSUs, which are stocks, not options. After the new-hire RSUs go away, you end up with two vest dates a year, and new comp offerings start the following year (so in 2024 you'll see new money in 2025, plus a small base salary bump that goes into effect that month).

Tech salaries are frequently stock-based, but Amazon's is unusual in that it vests only twice a year, bumps start the following year, and they recently made the change to do 2-year offers instead of 3.

Then try something like:

Create a quantity unit of ml and a liter unit.

In your product use:

  Unit stock: bottle or liter
  Unit purchase: bottle
  Consume: ml
  Price: ml

Set a product-specific QU conversion from bottle to ml.

Weirdly, the quick consume unit is based on the stock unit, not the consume unit. That seems like a bug.

Things that can be composted are usually food waste or food-soiled papers not treated with chemicals. Paper is hard to recycle because it can only be recycled into lower-quality paper, frequently gets contaminated, and is hard to separate out from everything else.

Thus, if something is compostable, I believe it's better to compost it than to recycle that same material.

As a professional software dev, I worked with pretty much every OS daily. My personal computer was Windows, my work laptop was a Mac, and I ran my code on Linux, so I was familiar with the things I liked and disliked about each. I also ran my own set of servers with my websites, mail servers, and various research projects to learn and grow.

Then I decided it was time to order a new laptop, and I didn't want to go to Windows 11 because I felt Microsoft was going too far into features I didn't want: ads, more tracking, pushing AI. Don't get me wrong, I like AI, but it was too much about forcing me to use it to justify their stock valuations.

I was also working on reducing my usage of big tech, setting up self-hosted services like pi-hole and Home Assistant, and starting to work on my own Mint alternative. It just felt natural to get a Framework laptop and try running Linux on it.

I still have a Windows desktop for games and other things, and I still use a Mac at work. I still like the Mac for its power efficiency, and it doesn't get as hot. Linux has some annoyances here and there, like dbus locking up, weird GNOME issues, my screen artifacting until I set some kernel params, or my WiFi card crashing until I replaced it with an Intel card, but I'll stick with it.

If I create a secondary config as you are suggesting, wouldn't it create a conflict with the server blocks of default.conf?

No, you can have multiple server blocks with the same listen directive. They just need to differ by their server_name and only one server block can contain default_server; Reference

NGINX uses the server_name directive to differentiate between the backend services. This is a classic virtual host configuration model.
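A minimal sketch of what that conf.d file could look like; the hostnames and upstream ports are hypothetical, and the second server block just shows how another app can share the same listen port:

```
# /etc/nginx/conf.d/mycooldjangoapp.conf
server {
    listen 80;
    server_name djangoapp.example.com;

    location / {
        proxy_pass http://127.0.0.1:8020;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# A second app can share the same listen port; NGINX picks the server
# block whose server_name matches the request's Host header.
server {
    listen 80;
    server_name otherapp.example.com;

    location / {
        proxy_pass http://127.0.0.1:9000;
    }
}
```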

I don't fully understand what you're saying, but let's break this down.

Since you say you get an NGINX page, what does your NGINX config look like? What exactly does the NGINX "login page" say? Is it an error or is it a directory listing or something else?

They could, and clearly they should have, but hindsight is 20/20. Software is complex, and there are a lot of places where invalid data could come in.