sorter_plainview

@sorter_plainview@lemmy.today
1 Post – 43 Comments
Joined 9 months ago

Since I am not from the western hemisphere, I find it difficult to understand what is wrong with the name. Is it just that it sounds bad, or is there another reason?


Oh, fun fact: the government also issued an order stating that VPN providers who won't log user information can't operate in India.


This is something people always miss in these discussions. A graphic designer working for a medium-sized marketing company is replaceable with Stable Diffusion or Midjourney, because there, quality is not really that important. They work on quantity, and "AI" is much more "efficient" at producing quantity. And that's without even paying for stock photos.

High-end jobs will always exist in every profession. But the vast majority of jobs in a sector do not belong to the "high end" category. That is where the job loss is going to happen, not for Beeple Crap-level artists.


I completely agree with this. I work as a User Experience researcher and I have been noticing this for some time. I'm not a traditional UX person; I work more at the intersection of UX and programming. I think the core problem in any discussion about a software product is that the people talking about it kind of assume everyone else functions the same way they do.

What you described here as a techie is, in simple terms, a person who uses or has to use a computer and its file system every day. They spend a huge amount of time with a computer and slowly they organise their stuff. Most of the time they want more control over that stuff; some of them end up on Linux-based systems, and some find alternative ways.

There are two other kinds of people. One is a person who uses the computer every day but is completely limited to their enterprise software. Even though they spend countless hours on the computer, they rarely end up using the OS itself. A huge part of the service industry belongs to this group. Most of the time they have a dedicated IT department who will take care of any issue.

The third category is people who rarely use computers, meaning once or twice every few days. Almost all people with non-white-collar jobs belong to this category. They mainly use phones to get daily stuff done.

If you look at Microsoft's customer base, it has never been the first category. Microsoft tried really hard with .NET in the Ballmer era and even built a strong base at that time, but I am of the opinion that a huge shift happened with the wide adoption of the Internet. On some forum I recently saw someone say that TypeScript gave Microsoft some recognition and kept them relevant. They made some good contributions too.

So, as I mentioned, the customer base was always the second and third categories. People in these categories focus only on getting stuff done: bare-minimum maintenance and results with as little effort as possible. Most of them don't really care about organising their files or even finding them. Many people just re-download stuff from email, messaging apps, or drives whenever they need a file. Microsoft tried to address this with indexed search inside the OS, but it didn't work out well because of the resource requirements and many bugs. For these users a feature like Recall, or Apple's Spotlight, is really useful.

The way Apple and even Android are going forward is in this direction: restricting the user to the surface of the product and making things easy to find and use through aggregating applications. The Gallery app is a good example. Microsoft knew this long ago. 'Pictures', 'Documents' and all the other folders were just an example of this; they never 'enforced' it. In earlier days people used to keep a separate drive for their documents because Windows got corrupted easily and, when reinstalling, only the 'C:' drive needed to be formatted. Only after Microsoft started selling pre-installed Windows through OEMs were they able to change this habit.

Windows is also pushing in this same direction: limiting users to the surface, because the two categories I mentioned don't really 'maintain' their systems. Just like with cars, some people like to maintain their own, and many others let a paid service take care of it. But when it comes to 'personal' computers with 'personal' files, a 'paid' service is not really an option, so this lands on the shoulders of the OS companies as an opportunity. Whoever offers a better solution will see more adoption.

Microsoft is going to run into many contradictions soon because of their early, widespread adoption of AI. Their net-zero global emissions target is a straightforward example.

It's been like this for a long time. I still find it difficult to access raw.github. The reversal doesn't seem complete, as far as I can tell.

Edit: checked now, still can't.

git commit --amend --no-edit

This helped me countless times...
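
A rough sketch of how I typically use it (the file name is just a placeholder):

# noticed a typo right after committing
git add src/app.py
git commit --amend --no-edit    # fold the fix into the previous commit, keep the message

# if the commit was already pushed, the amended one needs a careful force push
git push --force-with-lease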

It uses the Bing API for results. DDG is just a front end for Bing.

TBH I feel like this is something they made up once it got more attention. If they had felt remorse, they would have come back to apologise or correct their mistake sometime in the past two weeks, I guess.

Who knows, maybe they really are ill. Or maybe they just made everything up.

RustDesk

If you have used AnyDesk in the past, this gives the same experience. I used it recently and it has a lot of features, including unattended access.

They recommend self-hosting an instance for better performance.
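
If anyone wants to try the self-hosted route, the server side is two small components (an ID/rendezvous server and a relay). A minimal sketch, assuming Docker and the official rustdesk-server image; check their docs for the ports and keys:

# hbbs = ID/rendezvous server, hbbr = relay
docker run -d --name hbbs --net=host -v ./rustdesk-data:/root rustdesk/rustdesk-server hbbs
docker run -d --name hbbr --net=host -v ./rustdesk-data:/root rustdesk/rustdesk-server hbbr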

Even though I don't completely support what the other person said, the defense you are making here is dangerous. It's not gatekeeping or elitism, which is the other person's argument, and I don't see the point of arguing with them about that.

So here you said 'biting off more than you can chew'. The fundamental problem I see, which is something people say about Linux too, is that the entry barrier is pretty high. In the case of Linux, most of the time it stems from a lack of easy-to-access documentation. But when it comes to specific projects, the documentation is simply incomplete. Many self-hostable applications suffer from this.

People should be able to learn their way up to chewing bigger things; that is how one improves. Most people won't enjoy a steep learning curve, and documentation helps ease that steepness. Along with that, I completely agree that many people who do figure things out won't share or contribute to the documentation.

My point is that in such scenarios we should encourage people to contribute to the project, instead of telling them there are easier ways to do it. Only then can an open source project grow.

Hey, sorry for the confusion. What I meant is that Proxmox is considered a bare metal hypervisor and virt-manager is a hypervisor inside an OS, right?


Replacing a human with some form of tech has been a long-standing practice, and in this scenario profitability and efficiency usually follow a known pattern. Unfortunately, what you said is exactly how the market has always operated in the past and will keep operating in the future.

The general pattern: a new technology is invented or a new opportunity is identified, then a bunch of companies enter the market as competing entities. They offer competitive prices to customers in an attempt to gain market dominance.

But the problem starts when low profits drive some companies to a point where they either go bust, dissolve the division, or sell to a competitor. Usually after this a dominant company emerges in the market segment, and the monopolies are created. From that point on, companies either raise prices or squeeze customers for more money, and thereby start making profits. This has been the exact pattern in the tech industry for several decades.

This is also why, in the case of AI, companies are racing to capture market dominance. Early entrants always get a small advantage, which helps them gain prominence in the segment.

Off-topic question: are you familiar with AVEVA (Wonderware) System Platform? We have been working with it for a smart city command and control centre. The challenge is bringing equipment from different OEMs under a single interface. Some projects want this to simplify operations; they prefer one application over many. What

Another thing: I saw you mentioned some OEMs, and most of the time they have some variant of an open protocol that only works with their own PLCs. Why doesn't SCADA/automation move to universal standards? Is it because only a few OEMs hold significant market share? I am aware of OPC UA, but it doesn't have universal support.

Recently they officially added a module to censor stuff on an individual instance basis...

Hmm... Machine learning on a dataset with images?

Well, that reminded me of the latest stand-up by Neil Brennan on Netflix... Don't want to spoil it...

Well... I think you are putting too many expectations on the average person. I'm pretty sure a lot of people are going to be 'mind blown' by the new Recall feature and will hail it as a technological marvel. Very few people care about privacy, and even fewer really understand how to get some privacy. Complete privacy is near impossible.

Actually you can... I do that with my setup. Just point your domain to the new IP Tailscale assigns to your server. That's all. They recently started supporting HTTPS certificates too, even though that's not needed for internal-only communication.

Aah... Isn't that what's called a bare metal OS?


There is no limit implemented, but it consistently failed to transfer an 8 GB file between two VMs. LocalSend is more reliable in my case.

Oh dear... I really thought I understood what bare metal means... But it looks like this is beyond my tech comprehension.


Oh, forgot to add: the last case you mentioned, where multiple users share a PC and keep a folder in sync for all of them, is not straightforward. It needs another always-on (server-like) device.

At least on Windows, each user gets a different Syncthing ID. So if you sync the folder with an always-on device, the other user can pick up the update from it when they come online.

One crucial element that needs to be highlighted here is that Syncthing is decentralised by design; it is different from a server-client way of thinking. It is very much like how git stores content, if you are familiar with that.

For example, say I have 5 devices and there is a folder I want kept in sync across all of them. Since there is no server, an update made on one device (let us call it the Source Device) has to propagate to the other devices either directly or indirectly. For now I'm assuming all 5 devices are configured to communicate with each other directly.

Whenever one of the other 4 devices (Device 1) is 'online' at the same time as the Source Device, the sync happens. This is the direct way. The indirect way: say that after the sync between the Source Device and Device 1, the Source Device goes 'offline' but Device 1 stays 'online'; now if Device 2 comes online, the change propagates from Device 1 to Device 2.

Note that if the assumption that every device is configured with every other device does not hold, a change has to propagate along a path similar to the indirect way, even if all the devices are online at the same time.

This configuration, where each device is configured to communicate with all the others, is a pain to maintain, since Syncthing is not designed as a publish-subscribe model. What people usually do is add an always-on device (usually a server) as one of the devices to keep in sync. Again, this is not a client-server model; each device is a 'node', and the always-on device is just another node.

As you already experienced, it is very easy to get sync conflicts if a folder is shared between multiple users, because of this decentralised design. In my opinion Syncthing works best for a single user. My use cases are syncing my notes between PC and mobile, syncing files scanned on the mobile to my PC, etc.

If your case is more focused on multiple users, a WebDAV server can be an option, but again it's not straightforward and may not cover all use cases. Depending on what you are trying to achieve, a more suitable tool might be available; for example, if the aim is collaborative development, there is Iroh (still in early stages of development).

I hope this helps.


The IP should not cause any issues; IDs are just a hash of the certificate Syncthing uses. Can you elaborate a little on the current setup? Device, OS, user, etc. Also, if possible, can you explain your use case? As I mentioned, Syncthing is very specific in what it can do, so it may not be the best solution for your case.
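
By the way, you can check the ID yourself; on my install it can be printed straight from the local certificate (assuming the syncthing binary is on your PATH; older builds spell the flag with a single dash):

syncthing --device-id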


One thing you can try, if you haven't already, is configuring two different ports for the two users. The GUI has an option to adjust the ports, and you can also configure two different services to start depending on the logged-in user. I haven't done it myself on Linux, but it looks like people have had success. One R*ddit thread, for example:

Syncthing on a multi user Computer
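
For reference, this is roughly what the per-user service setup looks like on a systemd distro, assuming usernames alice and bob and that the package ships the standard syncthing@.service template (the ports are just example values):

# one Syncthing instance per user, started at boot no matter who is logged in
sudo systemctl enable --now syncthing@alice.service
sudo systemctl enable --now syncthing@bob.service

# then give each instance its own GUI address so they don't collide,
# e.g. 127.0.0.1:8384 for alice and 127.0.0.1:8385 for bob
# (Settings -> GUI in the Web UI, or the <gui><address> element in config.xml)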

Butterfly

Try this. Still under active development, but I like it so far.

The site is Sansec. They uncovered it. They also specify how the malware redirects users to sports betting sites.

What is the difference between Virtual Machine Manager and Proxmox?


I had to order a charger from another country since the model for our country has been out of stock for months.

I have been using something very similar to this. In my team I insisted that people without any git experience work on a separate local branch rather than the feature branch. To keep screw-ups minimal, we pull and create a local feature branch and then a new, local-only dummy branch on top of it. Once the team is more comfortable with git, I am planning to treat the local feature branch itself as the dummy branch.

So far things have been pretty neat. The spaghetti is gone and conflicts are minimal.
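
A rough sketch of the branch dance, in case it helps; the branch names are only examples:

git fetch origin
git checkout -b feature/login origin/feature/login   # local copy of the shared feature branch
git checkout -b feature/login-dummy                   # local-only dummy branch to experiment on

# commit freely on the dummy branch; once it looks sane,
# bring it onto the local feature branch and push only that
git checkout feature/login
git merge feature/login-dummy
git push origin feature/login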

The exact setup can be achieved with Tailscale. A not-so-well-known feature is that you can point your domain to the Tailscale IP (the new IP assigned by Tailscale), and it will act just like a normal hosting setup.

The advantage: any device or person you haven't pre-approved can't see anything if they go to the domain or subdomains. They only work if you are connected and authenticated to the Tailscale network. I have a similar setup; if you need more pointers, please ping me.
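
Roughly the steps, assuming app.example.com is your domain and the commands run on the server (the DNS record itself is created at your DNS provider):

# find the Tailscale IP the server was assigned
tailscale ip -4

# then create an A record at your DNS provider:
#   app.example.com  ->  100.x.y.z   (the IP printed above)

# optional: fetch a certificate for the machine's tailnet name
# (works only for the *.ts.net name, with HTTPS enabled in the admin console)
tailscale cert myserver.tailnet-name.ts.net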


The song has been sung...

I actually use Nginx. The major advantage comes when you have to access something directly, for example when a client app on your device wants to reach a service you host. In that case Heimdall won't be enough. You can still use the IP with a port, but I prefer subdomains. I use Nginx Proxy Manager to manage everything.
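
If anyone wants to try the same, Nginx Proxy Manager runs as a single container; a minimal sketch assuming Docker (80/443 for traffic, 81 for its admin UI, which are its defaults):

docker run -d --name npm \
  -p 80:80 -p 81:81 -p 443:443 \
  -v ./npm/data:/data \
  -v ./npm/letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager:latest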

Regarding the network going down: the proprietary part of Tailscale is the coordination server. There is an open source implementation of it called Headscale. If you are okay with managing your own thing, that is an alternative; obviously the convenience will be affected.

Apart from that, if you haven't already, I highly recommend reading the blog post on how Tailscale works. It gives a really good introduction to the infrastructure. The summary is that your connections are P2P, using WireGuard, so I don't think Tailscale will hit a failure scenario that easily.
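
If you do go the Headscale route, the client side stays the same; you just point it at your own coordination server (the URL below is a placeholder):

# on each device, replace the default coordination server with your own
sudo tailscale up --login-server https://headscale.example.com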

I hope this helps.

Come on.. don't be so pessimistic!!

Can you share the settings you're currently using? For me a good starting point is a preset with the target resolution, then fine-tuning the RF value to get a good reduction in size.
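
For reference, this is roughly my starting point with the CLI; the preset name and RF value are just what I'd try first, adjust to taste:

HandBrakeCLI -i input.mkv -o output.mkv --preset "Fast 1080p30" -q 22

# lower -q (RF) = better quality and a bigger file; raising it toward 24-26
# usually gives a noticeable size reduction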

It's the GitHub username.

If Tailscale suits your needs but you're hitting limitations on the free plan, an alternative is Innernet: https://github.com/tonarino/innernet

Obviously it's not as user-friendly as Tailscale and you have to work things out on the command line; just posting it as an alternative, that's all.

Well, this thread has clearly established that I neither have technical knowledge nor pay attention to spelling...

Jokes aside, this is a good explanation. I have seen admins using vSphere and it kind of makes sense. I'm just starting to scratch the surface of homelabbing and have started out with a Raspberry Pi. My dream is a full-fledged, self-sustaining homelab.


Well, this could be the start of a rabbit hole. Time to spend hours reading stuff I don't really understand.

What's the licensing part you mentioned? Can you elaborate a little?