dudeami0

@dudeami0@lemmy.dudeami.win
1 Post – 101 Comments
Joined 5 months ago

I like to code, garden and tinker

From my understanding, you are pretty safe as long as you don't provoke them (walking through the middle of a herd might be considered provoking) or get near their calves. This article from the UK states "Where recorded, 91% of HSE reported fatalities on the public were caused by cows with calves". Basically, mothers with calves are going to be very protective.

Cows are a domesticated creature, so they are generally docile, but I would exercise caution because if need be they will use their mass and strength against you. I've heard stories of farmers running from cows and narrowly escaping under a fence. Most of these did involve a farmer trying to separate a calf from its mother. I've also heard stories of cows jumping fences.

And as far as memes go:

I think the point would be to make them like cigarette warning labels. At the moment the text can be hidden on a bottle or can in tiny text. It needs to be a big ugly white box with a black border and large text that gets people's attention.

My question is whether it counts as price gouging when there is a heat advisory, and how enforceable that is. For water it's just cruel, especially in places with little access to drinkable tap water.

Sadly it wasn't a bid to open source the AI, but rather a bid for payment.

I fear a lot of this is bot accounts.

There seems to be a lot of FUD going around with the defederation news. The problem, as most problems currently seem to be, is that the population is exploding and the tooling isn't there to support the real growth in numbers. Beehaw has been a community for quite a while; they were just here first and so have more established communities, and you can't blame them for that. They have every right to defederate from other instances, especially when their main concern is being able to moderate content for their users. Each instance serves its users first, and another instance's lack of user moderation shouldn't be their problem. They said they'll open back up once they can manage the moderation workload.

As for the fragmentation, this is really how lemmy was designed to be. There are talks of adding federated community listings and community browsers to lemmy itself to support discovery. Really, these features just weren't needed a couple weeks ago and now they are. In my opinion, the larger communities should have communities on multiple instances. You can cross-post across instance communities as well. Hopefully in the future the fragmentation can be fixed via the use of tags and other possible organizational tools that help federation but keep things decentralized.

The established instances have dominance due to the first-mover advantage, which is causing the centralization at present. Overall, the experience is going to be different for a lot of reddit users due to the very nature of decentralizing things. I feel confident solutions will be found for most of these issues that make the federated experience easier to navigate while still supporting the decentralized nature. But the fact is, this isn't and never will be "reddit" as it was, which was a centralized system with a single authority (the ToS and admins).

Meta is allowed to use the ActivityPub standard just as much as anyone else. This does not mean anyone who decides to use it must interact with others who use it. SMTP servers will reject your mail if you aren't sending from an established server or don't have the right signatures, and sometimes even then. Servers often block HTTP over VPNs, and there are even rules about referencing content across servers in HTTP (CORS). Just because a standard is open doesn't mean everything using that standard has to communicate with everything else.

The beauty of this is that those running instances can't restrict other instances' access to the fediverse. If Meta does start using ActivityPub, every current instance can block it. Other entities with the resources to do so could still choose to run instances that federate with Meta. Currently the biggest issue is the vast difference in scale between current instances and Meta. But if other entities got into the fediverse and federated with Meta, this would still be a decentralized system, just with larger nodes in it. All of this still allows those who run small instances to block these larger, more mainstream instances and keep things the way they want.

Just as it's impossible to stop scrapers from archiving data on traditional websites. "Deleted" data is probably in a database somewhere, being sold by someone. As you said, you lose some degree of control over your data as soon as you post it. Data is valuable, and if there is a will there is a way.

The benefit of not having half the screen devoted to trying to get you to download the app is a huge bonus.

Fluent in finance is just another forum that says it's your fault you're poor. They say you don't play the game right, and they may be right for a rigged game. But the fact is you shouldn't be required to play a game to get what's your fair share, and fluent in finance just says you didn't invest right and didn't set up your future right to live off the backs of other workers.

The rest is just hyperbolic headlines that drive engagement, which is the cancer of any social media platform. No one makes a billion dollars in income as defined by the US tax code; they make a billion dollars in equity, which can be used to back loans, which is part of the whole issue of obscuring cash flow. Then they can just use this as fodder to call anyone supporting the cause an idiot ("No one makes a billion dollars a year") when we know they do, it's just accounted for differently.

Google tried to add support for it in their product

Is like saying that google tried to add support for HTTP to their products. Google Talk was initially a XMPP chat server hosted at talk.google.com, source here.

Anyone that used Google Talk (me included) used XMPP, whether they knew it or not.

Besides this, it's only a story of how an eager corporation adopted a protocol and sold how well it supported that protocol, only to abandon it because corporate interests got in the way (as they always do). It doesn't have to be malicious to be effective in fragmenting a community, because of the immense power those corporations wield to steer users in a direction they want once they abandon the product.

That being said, if Google Talk wasn't popular, why did they axe the product based on XMPP and replace it with something proprietary (aka Hangouts)? If chat wasn't popular among their users, this wouldn't have been needed. This could have been for internal reasons, or it could have been to fragment the user base knowing they had the most users and would force convergence; we really can't be sure. The only thing we can be sure of is we shouldn't trust corporations to have the best interest of their users; they only have the best interest of their shareholders in the end.

This wouldn't be a surprise to anyone seeing the conditions pollinator honey bees are kept in. Most are distributed in cardboard boxes to fields to be used for pollination. Once they pollinate the fields they aren't really of concern to the farmers anymore. In my opinion, this is like saying half of cattle die every year, but populations remain stable. It's kind of by design, and in some cases the goal.

Upgraded my instance, switched the docker images to 0.18.0-rc.6 and it started right up. Appears to have no issues for me thus far, so hopefully a full release will be right around the corner.

Being an admin of an instance, I can't even see my own history of visited posts. I can't verify this, but I doubt this information is being stored in the database currently.

This being said, each instance has full control over their API server and the web-based application being served, so they could add monitoring to either to gather this data. If they did this on the API end it would be undetectable. Running your own instance is the only foolproof method; otherwise you need to trust the instance operator.

If you are expecting a more windows-like experience, I would suggest using Ubuntu or Kubuntu (or any other distro using Gnome/KDE), as these are much closer to a modern Windows GUI. With Ubuntu, I can use the default file manager (nautilus) and do Ctrl+F and filter files via *.ext, then select these files then cut and paste to a new folder (drag and drop does not seem to work from the search results). In Kubuntu, the search doesn't recognize * as a wildcard in KDE's file manager (dolphin) but does support drag/drop between windows.

From what I can read here:

The DMA’s threshold is very high: companies will only be hit by the rules if they have an annual turnover of €7.5 billion within the EU or a worldwide market valuation of €75 billion. Gatekeepers must also have at least 45 million monthly individual end-users and 100,000 business users.

So instances will not be required to federate because they will not be making the thresholds. This could explain why Meta is going to use the ActivityPub protocol though, and is an interesting perspective on the issue.

For your own sanity, please use a formatter for your IDE. This will also help when others (and you) read the code, as indentation is a convenience for understanding program flow. From what I see:

  • Your enable and disable functions are never called for this portion of code
  • You use a possibly undeclared enabled variable; if so, it never passes scope between the handleClick and animation methods
  • You do not use any callback or await for invoke or updateCurrentBox, causing all the code after either to immediately run. As a result, enabled is never false, since it just instantly flips back to true. I'm not sure what library invoke is from, but there should be a callback or the function returns a Promise which can be awaited.
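As an illustration of that third point, here is a minimal Python sketch of the same pitfall (the original code is presumably JavaScript, and `invoke_animation` here is a hypothetical stand-in for whatever `invoke` does): without the `await`, `enabled` would flip back to true before the work actually finished.

```python
import asyncio

enabled = True  # module-level flag shared by the click handler

async def invoke_animation() -> None:
    """Stand-in for a long-running call such as a backend invoke()."""
    await asyncio.sleep(0.1)  # simulates the animation/backend work

async def handle_click() -> None:
    global enabled
    if not enabled:
        return              # ignore clicks while the animation runs
    enabled = False
    await invoke_animation()  # without this await, execution would fall
    enabled = True            # straight through and re-enable instantly

asyncio.run(handle_click())
print(enabled)  # True, but only after the awaited call completed
```

In JavaScript the equivalent fix is `await invoke(...)` inside an `async` handler, or chaining the re-enable into `.then()`.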

Also, any plugin that Twitch doesn't like (for example TTV LOL) is detected and will prevent a log in. You'll need to disable the plugin to login, but can use it after logging in.

Semi-cold? That's extra, you'll be lucky to afford it. The affordable water has been sitting out on the pavement for a few weeks.

SQL is the industry standard for a reason, it's well known and it does the job quite well. The important part of any technology is to use it when it's advantageous, not to use it for everything. SQL works great for looking up relational data, but isn't a replacement for a filesystem. I'll try to address each concern separately, and this is only my opinion and not some consensus:

Most programmers aren't DB experts: Most programmers aren't "experts", period, so we need to work with this. IT is a wide and varied field that requires a vast depth of knowledge in specific domains to be an "expert" in just that domain. This is why teams break up responsibilities, the fact the community came in and fixed the issues doesn't change the fact the program did work before. This is all normal in development, you get things working in an acceptable manner and when the requirements change (in the lemmy example, this would be scaling requirements) you fix those problems.

translation step from binary (program): If you are using SQL to store binary data, this might cause performance issues. SQL isn't an all-in-one data store; it's a database for running queries against relational data. I would say this is an architecture problem, as there are better methods for storing and distributing binary blobs of data. If you are talking about parsing strings, string parsing is probably one of the least demanding parts of a SQL query. Prepared statements can also be used to separate the query logic from the data and alleviate the SQL injection attack vector.
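As a sketch of that last point, here's a parameterized query using Python's built-in `sqlite3` module (the table and values are made up for the example). The query text is fixed, and user input travels separately as a bound parameter, so it can never be interpreted as SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?), (?, ?)",
             ("alice", "alice@example.com", "bob", "bob@example.com"))

# A classic injection payload, bound as data rather than spliced into SQL.
user_input = "alice'; DROP TABLE users; --"
rows = conn.execute("SELECT email FROM users WHERE name = ?",
                    (user_input,)).fetchall()
print(rows)  # [] -- the malicious string simply matches no row

rows = conn.execute("SELECT email FROM users WHERE name = ?",
                    ("alice",)).fetchall()
print(rows)  # [('alice@example.com',)]
```

The same mechanism is what prepared statements provide in most SQL drivers; the database plans the fixed query once and binds values afterward.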

Yes, there are ORMs: And you'll see a ton of developers despise ORMs. They are an additional layer of abstraction that can either help or hinder depending on the application. Sure, they make things real easy, but they can also cause many of the problems you are mentioning, like performance bottlenecks. Query builders can also be used to create SQL queries in a manner similar to an ORM if writing plain string-based queries isn't ideal.

I use my own router with DD-WRT in-between the ISPs router/modem and my LAN, and use a different subnet. I haven't had any issues with this myself, and my router just sees the ISP router/modem as the WAN.

"From March 1, 2024, an order will come into force to block VPN services providing access to sites banned in Russia," Sheikin was quoted as saying by state news agency RIA.

I assume this means it's regarding outgoing communications, for censorship purposes most likely. I'd be surprised if they were blocking incoming VPN traffic, and I don't think the Russian government has an issue with Yandex operating.

I imagine gmail and maps provides good data on user preferences and activities, which is perfect data for advertisers. Track where they go and who they communicate with.

The first quote is a great demonstration of using logical fallacies to sell a point, and I am glad the article breaks down the argument. Anyone using a loaded question such as:

Is the goal of the Fediverse to be anti-corporate/anti-commercial, or to be pro-openness?

Doesn't fundamentally understand the fediverse. Almost every project's goal is supporting the decentralization of these technologies. To quote the website fediverse.to:

The fediverse is a collection of community-owned, ad-free, decentralised, and privacy-centric social networks.

Allowing a single entity with a larger and more dominant platform, more power in the legislatures of the world, and effectively infinite times more capital to come in destroys the decentralized nature. Meta also doesn't stand for "community-owned", "ad-free", nor "privacy-centric". Meta's goal here is pretty obviously to centralize and control the networks as much as possible, and scrape the remaining data from other instances, using the ActivityPub protocol. Meta is a corporation whose motive is to increase shareholder value. The fact these are community-run instances makes it like Walmart coming in to stomp out the local grocery.

Sounds like some QoS software is also limiting LAN traffic, seeing as it still works if the internet is disconnected. I would look if your router has "Adaptive QoS" or something similar enabled.

Your computer doesn't "waste" electricity; power usage is on-demand. A PSU generally has 3 "rails": a 12V (this powers most of the devices), a 5V (for peripherals/USB) and a 3.3V (iirc memory modules use this). Modern PSUs are switched-mode power supplies that use a switching voltage regulator, which is more efficient than traditional linear regulators.

The efficiency of the PSU/transformer would be what determines if one or the other is more wasteful. Most PSUs (I would argue any PSU of quality) will have an 80 Plus rating that defines how efficiently they can convert power. I am not familiar enough with modern wall chargers to know what they're testing at. I could see the low-end wall chargers using more wasteful designs, but a high quality rapid wall charger is probably close to, if not on par with, a PC PSU. Hopefully someone with more knowledge of these can weigh in on this.
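To put rough numbers on the conversion loss, here's a quick sketch at a few 80 Plus tiers (the efficiency figures are illustrative approximations, not exact certification values, and the 300 W load is made up):

```python
# Hypothetical 300 W DC load served by PSUs at various efficiencies.
load_watts = 300.0
tiers = {"80 Plus": 0.80, "Bronze": 0.82, "Gold": 0.87, "Titanium": 0.94}

for name, efficiency in tiers.items():
    wall_draw = load_watts / efficiency  # power pulled from the outlet
    waste = wall_draw - load_watts       # lost as heat during conversion
    print(f"{name}: {wall_draw:.0f} W from the wall, {waste:.0f} W wasted")
```

So the same load wastes roughly 75 W on a bare 80 Plus unit but only about 19 W on a Titanium one, which is the whole point of the rating tiers.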

Disclaimer: I am not an AI researcher and just have an interest in AI. Everything I say is probably gibberish, and just my amateur understanding of the AI models used today.

It seems these LLMs use a clever trick in probability to give words meaning via statistical probabilities of their usage. So any result is just a statistical chance that those words will work well with each other. The number of indexes used to index "tokens" (in this case words), along with the number of layers in the AI model used to correlate usage of these tokens, seems to drastically increase the "intelligence" of the responses. This doesn't seem able to overcome unknown circumstances, but does what AI does and relies on probability to answer the question. So in those cases, the next closest thing from the training data is substituted and considered "good enough". I would think some confidence variable is what is truly needed for the current LLMs, as they seem capable of giving meaningful responses but give a "hallucinated" response when not enough data is available to answer the question.

Overall, I would guess this is a limitation in the LLM's ability to map words to meaning. Imagine reading everything ever written; you'd probably be able to make intelligent responses to most questions. Now imagine you were asked something you never read about, but were expected to respond with an answer. This is what I personally feel these "hallucinations" are: the LLM's best approximations. You can only answer what you know reliably, otherwise you are just guessing.
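To make the "statistical guess" idea concrete, here's a toy sketch of next-token sampling plus the kind of confidence check I'm imagining (the distribution and tokens are invented for illustration and bear no resemblance to a real model's vocabulary):

```python
import random

# Toy next-token distribution for some prefix: the model only ever emits
# a statistical pick, whether or not it "knows" the answer.
next_token_probs = {"blue": 0.85, "cloudy": 0.10, "falling": 0.05}

def sample(probs: dict, rng: random.Random) -> str:
    """Draw one token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
print(sample(next_token_probs, rng))

# A crude "confidence variable": flag answers where even the best token
# is unlikely, which is roughly where hallucinations live.
best = max(next_token_probs, key=next_token_probs.get)
confident = next_token_probs[best] > 0.5
print(best, confident)
```

With a flat distribution (no token clearly best), the check would report low confidence instead of letting the model bluff an answer.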

Season 8 on more torrents is probably considered to be the new hulu reboot. This is due to the disparity between the home release seasons and the TV broadcast seasons. So most likely if you have seasons 1-7, you have all the home release versions of the show, and therefore have the entire library.

  • Linux Mint is based on Ubuntu, stating "Linux Mint stands on the shoulder of giants, it is based on Debian and Ubuntu." on their homepage

  • Pop!OS is owned by System76 which is a business

  • OpenSUSE is owned by SUSE which is a quite profitable business

  • Manjaro is owned by Manjaro GmbH & Co. KG to "... to effectively engage in commercial agreements, form partnerships, and offer professional services".

You can dislike Canonical for whatever reasons, and I would like to hear them. But saying "They are a business" is a bit disingenuous, since all these distros have a business backing them and commercial interests in mind.

I'd say setting registrations to closed and having the operator enable/configure which to use is the best default. CAPTCHAs can also be automated, so this won't stop anyone that is ambitious enough. If someone sees value in automating account registrations, they might be willing to pay for the CAPTCHAs to be solved for a fraction of a US cent each.

Being proudly ignorant of everything is bad. I will respect people who know they don't know things though, you can't know everything about everything. It's why people generally specialize in a field in an industry.

Is there a reason you consider passwords longer than 60 characters an issue, or does the backend reject such passwords? In my experience, there should be no upper bound on password length except maybe in the order of request size being too large (say a password that is a several kilobytes).
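One reason an upper bound is rarely needed on the backend: a properly hashed password is stored at a fixed size regardless of input length. A sketch using Python's `hashlib` (the iteration count and lengths are arbitrary choices for the example):

```python
import hashlib
import os
import secrets

salt = os.urandom(16)
short_pw = "hunter2"
long_pw = secrets.token_urlsafe(150)  # a 200-character password

# Whatever the password length, the derived hash is a fixed 32 bytes,
# so a 200-character password costs the server no more storage than an
# 8-character one.
for pw in (short_pw, long_pw):
    digest = hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 200_000)
    print(len(pw), len(digest))
```

The only real server-side cost of a long password is hashing a slightly larger input once per login, which is why limits in the kilobyte range (to bound request size) are the only ones that make sense.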

The screenshot is a picture of visit duration, I would be interested to see the other statistics. Most Reddit users will return though once the sub-reddits open back up, which is why they are forcing it.

This seems like someone learned about key derivation functions and applied it to passwords. So with this system, it's stateless and no passwords are stored (encrypted or not). You need 4 things to generate passwords:

  • Your full name
  • Spectre secret
  • Site Domain
  • Master password

This seems counterintuitive to the stateless nature, since at least one of these (the Spectre secret) will need to be stored somewhere. For UX the full name probably would also be stored, and the site domain can be gotten via some API on password use. This leaves the master password as the only portion not stored, and on "unlocking" the database it would probably be stored on the user's device for a period of time.
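For reference, the derivation idea itself fits in a few lines. This sketch uses PBKDF2 with made-up parameters purely for illustration; Spectre's actual algorithm, salting, and password encoding differ:

```python
import base64
import hashlib

def derive_password(full_name: str, secret: str, site: str,
                    master_password: str) -> str:
    """Stateless, Spectre-style derivation: the same four inputs always
    regenerate the same site password, so nothing need be stored."""
    # Illustrative construction, not Spectre's real scheme.
    salt = f"{full_name}|{secret}|{site}".encode()
    key = hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                              salt, 100_000)
    # Encode to something printable; real schemes map to site policies.
    return base64.urlsafe_b64encode(key)[:20].decode()

pw1 = derive_password("Jane Doe", "s3cret", "example.com", "hunter2")
pw2 = derive_password("Jane Doe", "s3cret", "example.com", "hunter2")
print(pw1 == pw2)  # True: deterministic, no database required
```

Note the flip side visible here: changing any input changes every derived password, which is exactly why rotating a single site's password is awkward in such schemes.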

This also ignores some of the requirements of websites needing passwords (some support all characters, some only a-z0-9_, etc etc). If supported, this metadata would also need to be stored somewhere. The cons of not being able to change passwords is also a huge issue, as passwords should be changed often, or replaced with keys (which you also replace often!).

For attackers, this seems not much different than a database file. In most cases, they'll already know two of the four (site domain and full name, especially in corporate environments). This leaves only the Spectre secret and the master password doing the heavy lifting of security. This sounds a lot like a traditional password manager, where you have a master password, a database file, and an optional key file.

So the process to attack a traditional database system is to acquire the needed information (database file, master password/key file) and lookup the password (site domain/description). The process to attack spectre is to acquire the needed information (full name, secret, master password) and lookup the password (site domain/description). These have the same challenges of acquiring/brute forcing the master password and key file, and are essentially the same in the eyes of an attacker.

Overall I think passkeys, or something along that line, will replace passwords. Keys have been used for a long time in security-sensitive areas, can be swapped out easily, and provide much more protection than a password when large enough.

That would explain it, and I see they pushed docker images for 0.18.0-rc6. I might have to give it a spin on my instance, if they are running it must be pretty safe to use.

Edit: As for pending status, if I go click the orange "Subscription pending" button then re-subscribe it joins instantly.

Edit2: larger communities take some time still but you can actively see the posts coming in.

Edit3: The larger communities require a second unsubscribe/subscribe, but all my lemmy.ml communities are now synced! Great news.

Some instances are defederating from other instances due to the increased moderation workloads. This will hopefully improve once better moderation tools exist.

It could also be a technical issue, some instances are not configured correctly to federate at the speeds currently needed. This post on lemmy.ml details how to increase federation speed to handle the influx of new content.

Other than that, it could be servers are overloaded and federation back logs exist. I'm not aware of any other issues hampering federation at the moment, but they could definitely exist.

Hopefully this all clears up in time with support from the community.

Q1: Correct

Q2: Not at present, but it's a highly requested feature. I would imagine it will be around soon^tm^ in one form or another.

I currently am running the instance I am responding from on kubernetes. I published a helm chart, and others are working on them too. I feel being able to quickly deploy a kubernetes instance will help a lot of smaller instances pop up, and eventually be a good method of handling larger instances once horizontal scaling is figured out.

I would think it's more about knowing how to trust it. See some news article about "This study said X", don't take it as fact. See a study that has been done numerous times by different groups that corroborate a result and you can have a much higher degree of trust in it. There is a reason the scientific method is a continuous circle, it requires a feedback loop of verifying results and reproducibility. The current issue is clickbait headlines getting the attention, people see it's "Science" and blindly trust it and it becomes a religion like any other.

A blockchain is just a verifiable chain of transactions using cryptography and some agreed-upon protocol. Each "block" in the chain is a block of data that follows a format specified by the protocol. The protocol also decides who can push blocks and how to verify a block is valid. Its advantages come from the fact that the protocol can describe a method of distributing authority across a pool of untrusted third parties, while still making sure none of them can cheat. Currently the most popular forms are Proof of Work (PoW) and Proof of Stake (PoS).

Bitcoin for example is just an outgoing transaction to a specific crypto key (which is similar to a checking account) as a reward for "mining" the block, followed by a list of transactions going from one account to another. These are verified by finding a special chunk of data (a nonce) that makes the overall hash of the entire block begin with a required number of 0 bits, which makes it hard to compute and a race to find the right input data. This way of establishing an authority is called Proof of Work, and whoever is first and gets their block across the network faster wins. Other cryptocurrencies like Ethereum use Proof of Stake, where you "stake" currency you've already acquired as a promise that you won't cheat, and if someone can prove you cheated your stake is lost.
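A toy sketch of that leading-zeros search (single SHA-256 over invented block data, with an easy 16-bit difficulty; Bitcoin uses double SHA-256 and a vastly harder target):

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256 digest starts with the required
    number of zero bits. Brute force is the only way: each try is a
    fresh coin flip on the hash output."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"block with transactions", 16)
digest = hashlib.sha256(
    b"block with transactions" + nonce.to_bytes(8, "big")).hexdigest()
print(nonce, digest)  # digest begins with at least four hex zeros
```

Finding the nonce takes thousands of tries even at this toy difficulty, while anyone can verify the result with a single hash; that asymmetry is the whole trick.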

The problem it solves is not needing a trusted third party to handle this process, such as a government agency or an organization. Everyone can verify the integrity of a blockchain by using the protocol and going over each block, making sure the data follows the rules. This blockchain is distributed so everyone can make sure they are on the same chain; otherwise it's considered a "forked" chain and will migrate back to the point of consensus. This can be useful for situations where the incentive to cheat the system for monetary or political gain outweighs the cost of running a distributed ledger. It can also be useful when you don't want anyone selectively removing past data, as the chain of verifiability would be broken. The only issue with this is you need some way to reach a consensus on who gets to make each block in the chain, as someone needs to be the authority for that instant in time. This is where the requirement of Proof of Work (PoW) or Proof of Stake (PoS) comes in. Without these or another system that distributes the authority to create blocks, you lose the power of the blockchain.
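The "chain of verifiability" part is easy to sketch: each block commits to the previous block's hash, so tampering with any historical block breaks verification from that point on (the block format here is invented for the example):

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Hash a block's contents together with its predecessor's hash."""
    return hashlib.sha256(f"{prev_hash}|{data}".encode()).hexdigest()

# Build a tiny chain of made-up transactions.
chain = []
prev = "0" * 64  # genesis predecessor
for data in ("alice->bob:5", "bob->carol:2", "carol->alice:1"):
    h = block_hash(prev, data)
    chain.append({"prev": prev, "data": data, "hash": h})
    prev = h

def verify(chain) -> bool:
    """Re-derive every hash; any edited block fails the check."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or \
                block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

print(verify(chain))                  # True: untouched chain checks out
chain[1]["data"] = "bob->carol:2000"  # rewrite history
print(verify(chain))                  # False: the tampered block fails
```

Consensus (PoW/PoS) is what decides whose version of the chain everyone re-verifies; the hashing alone only guarantees that edits are detectable.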

Examples I've heard of are tracking shipments or parts (similar to how the FAA already mandates part traceability) and medical records. This way lots of organizations can publish records to a central system that isn't under any single entity's control, and no one can change their records to suit their needs.

These systems are not foolproof though; PoW can be abused using a 51% attack, and PoS requires some form of punishment for trying to cheat the system (in cryptocurrency you "stake" currency and lose it if you try to cheat). Both run into issues when there is no incentive to invest resources into the system, there is a lack of distribution across independent parties, or one party has sufficient power to gain majority control of the network.

Overall you are right to be skeptical of cryptocurrency; it's been a long time since I participated due to the waves of scam coins and the general focus on illegal activities such as gambling. The lack of central authorities also perpetuates the problem of cryptoscams, as anyone can start one and there are limited controls for stopping them. This is not dissimilar to previous investment scams though, it's just the modern iteration. The real question is whether it solves a real problem, as Bitcoin did in the sense that it helps facilitate transactions outside of government controls. You might not agree with that, but it does give it intrinsic value to a large number of people looking to move currency without as much paperwork. Whether that makes it worth $68.5k USD (at current prices) is a different story; different people have different use cases and I only highlighted one of those.