L_Acacia

@L_Acacia@lemmy.one
0 Posts – 44 Comments
Joined 1 year ago

They released a search engine where the model reads the first link before trying to answer your request.

You are easier to track with AdNauseam.

This is because LibreWolf reports itself as Firefox for privacy, and Vivaldi does the same thing with Chrome. There is no Vivaldi string in its user agent.
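
For illustration, here is roughly what a Vivaldi user agent looks like: a stock Chrome string with nothing identifying the browser. The version numbers are placeholders.

```python
# Illustrative only: a Vivaldi user agent is just a stock Chrome string;
# the version numbers here are placeholders.
vivaldi_ua = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/120.0.0.0 Safari/537.36"
)
print("Vivaldi" in vivaldi_ua)  # False: nothing identifies the browser
```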

JPEG-XL support is being tested in Firefox Nightly.

I put Zorin on my parents' computer 2 years ago. While it's a great distro, its Windows app support is just marketing: an out-of-date Wine version with an unmaintained launcher. Worse than tinkering with Wine yourself.

Llama models tuned for conversation are pretty good at it. ChatGPT also was, before getting nerfed a million times.

https://tiz-cycling-live.io/livestream.php

Be sure to use an adblocker. Sometimes the stream gets taken down and you have to wait a minute or two for them to repost one.

Windows doesn't run well on ARM, which can be a turn-off for some.

It is already here: half of article thumbnails are already AI generated.

Being able to run benchmarks doesn't make it a great experience to use, unfortunately. 3/4 of applications don't run or have bugs that the devs don't want to fix.

I use a similar feature on Discord quite extensively (custom emotes/stickers) and I don't feel they are just a novelty. They allow us to have inside jokes / custom reactions to specific events, and I really miss them when trying out open source alternatives.

The best way to run a Llama model locally is Text generation web UI; the model will most likely be quantized to 4/5-bit GGML / GPTQ today, which makes it possible to run on a "normal" computer.

Phind might make it accessible on their website soon, but it doesn't seem to be the case yet.

EDIT: Quantized versions are available thanks to TheBloke; a loading sketch follows below.
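
For anyone who wants a starting point, a minimal sketch of loading one of TheBloke's quantized models, assuming the transformers library with a GPTQ backend (optimum + auto-gptq) and accelerate installed; the repo name is an example, check TheBloke's page for the exact one.

```python
# Sketch: load a 4-bit GPTQ model from the Hugging Face Hub.
# Assumes transformers + a GPTQ backend + accelerate are installed;
# the repo id below is an example, not necessarily the one you want.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-7B-Chat-GPTQ"  # example repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # spread layers across available GPU(s)/CPU
)

prompt = "Explain quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```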

Specifically problem solving. ChatGPT also has multiple models, it is just hidden from the user.

If you have good enough hardware, this is a rabbit hole you could explore. https://github.com/oobabooga/text-generation-webui/

The training doesn't use CSAM; there is a 0% chance big tech would use that in their datasets. The models are somewhat able to link concepts like red and car, even if they have never seen a red car before.

It does not work exactly like Obsidian, as it is an outliner. I use both on the same vault, and Logseq is slower on larger vaults.

You can take a look at the exllama and llama.cpp source code on GitHub if you want to see how it is implemented.
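
If you just want the core idea without reading those codebases: group-wise low-bit quantization stores int4 weights plus one scale per group. A toy sketch of that idea (not their actual code):

```python
# Toy illustration of group-wise 4-bit quantization (the idea behind
# GPTQ/GGML-style formats); real implementations are far more involved.
import numpy as np

def quantize_4bit(weights: np.ndarray, group_size: int = 32):
    """Quantize a 1-D float array to 4-bit ints, one scale per group."""
    groups = weights.reshape(-1, group_size)
    scales = np.abs(groups).max(axis=1, keepdims=True) / 7  # int4: -8..7
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize_4bit(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return (q * scales).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s)
print("mean abs error:", np.abs(w - w_hat).mean())
```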

WhatsApp is Europe's iMessage.

The settings used for Vivaldi in this test are worse than the default ones. Pretty bad source, as the author tries to make Brave look as good as possible.

To be fair to Gemini, even though it is worse than Claude and GPT, the weird answers were caused by bad engineering and not by bad model training. They were forcing the incorporation of the Google search results even though the base model would most likely have gotten it right.

It works pretty well. You can create a good dataset for a fraction of the effort and price it would have required to do it by hand. The quality is similar. You just have to review each prompt so you don't train your model on bad data.
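
Roughly, the flow looks like this; a minimal sketch assuming the OpenAI Python client, with a placeholder model name and prompts:

```python
# Minimal sketch of LLM-assisted dataset creation with manual review,
# assuming the OpenAI Python client; model and prompts are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
topics = ["refund request", "shipping delay", "account locked"]

dataset = []
for topic in topics:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable model works here
        messages=[{
            "role": "user",
            "content": f"Write one short customer-support question about: {topic}",
        }],
    )
    sample = response.choices[0].message.content
    # The human review step: inspect each sample before keeping it.
    if input(f"Keep this sample? [y/n]\n{sample}\n> ").lower() == "y":
        dataset.append({"topic": topic, "text": sample})

with open("dataset.jsonl", "w") as f:
    for row in dataset:
        f.write(json.dumps(row) + "\n")
```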

Yes, but it will take some learning time

Look at Saudi Arabia, China, or the UAE; it's still a pretty efficient way to boost your economy. People don't need to be consumers if that isn't what your country needs.

Llama 2 now uses a license that allows for commercial use.

I think that for most people Linux is the simplest OS to use. I switched my parents' and sister's computers to Linux Mint, and they don't ask me for help with Windows changing their browser or moving their icons every two weeks anymore. Though if you are trying to do anything more than web browsing, document editing, and listening to music, you will have to learn how some of the OS works.

Some IPs are shadowbanned; if you are using a VPN/proxy, that might be the reason.

Around 48 GB of VRAM if you want to run it in 4-bit.
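
Back-of-envelope, assuming a 70B-parameter model (which is what that figure matches): 4-bit weights plus their scales come to roughly 40 GB, and the KV cache and activations push it toward 48 GB.

```python
# Back-of-envelope VRAM estimate for 4-bit inference; assumes a
# 70B-parameter model, which is what the ~48 GB figure matches.
params = 70e9
bits_per_weight = 4.5          # 4-bit weights + scales/zero-points overhead
weights_gb = params * bits_per_weight / 8 / 1e9
kv_cache_and_overhead_gb = 6   # rough allowance for KV cache + activations
print(f"~{weights_gb + kv_cache_and_overhead_gb:.0f} GB total")  # ~45 GB
```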

They know the tech is not good enough, they just don't care and want to maximise profit.

Buying a second-hand 3090/7900 XTX will be cheaper for better performance if you are not building the rest of the machine.

There is no chance they are the ones training it. It costs hundreds of millions to get a decent model. Seems like they will be using Mistral, who have scraped pretty much 100% of the web to use as training data.
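
For a sense of scale, a rough cost estimate using the common ~6 × params × tokens FLOPs rule of thumb; the model size and token count below are rumored GPT-4-class figures, and the GPU throughput and price are assumptions:

```python
# Back-of-envelope training cost via the ~6 * params * tokens FLOPs
# rule of thumb. Model size / token count are rumored GPT-4-class
# figures; GPU throughput and price are assumptions.
params = 1.8e12          # rumored GPT-4-class parameter count
tokens = 13e12           # rumored training-token count
flops = 6 * params * tokens

gpu_flops = 3e14         # ~300 TFLOPs effective per GPU after utilization
gpu_hours = flops / gpu_flops / 3600
cost = gpu_hours * 2.5   # ~$2.5 per GPU-hour
print(f"{gpu_hours:,.0f} GPU-hours, ~${cost / 1e6:.0f}M")  # ~$300M
```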

DuckDuckGo results are pretty bad in my experience; Brave Search and Startpage are way better.

You are limited by bandwidth, not compute, with LLMs, so an accelerator won't change the inference tokens/s.
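
The back-of-envelope: each generated token has to stream all the weights through the chip once, so tokens/s is capped by bandwidth divided by model size. Illustrative numbers:

```python
# Why memory bandwidth bounds single-stream LLM inference: every new
# token requires reading all the weights once. Numbers are illustrative.
model_gb = 35          # e.g. a 70B model quantized to ~4 bits
bandwidth_gbps = 936   # RTX 3090 memory bandwidth, GB/s
tokens_per_s = bandwidth_gbps / model_gb
print(f"upper bound: ~{tokens_per_s:.0f} tokens/s")  # ~27 tokens/s
```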

It works with plugins just like Obsidian, so if their implementation is not good enough, you can always find a Grammarly plugin.

Mistral models don't have many filters, don't worry lmao

They have a GitHub where you can see all the changes that are being made. https://github.com/privacyguides/privacyguides.org/releases

To run this model locally at GPT-4 writing speed you need at least 2 x 3090s or 2 x 7900 XTXs. VRAM is the limiting factor in 99% of cases for inference. You could try a smaller model like Mistral-Instruct or SOLAR with your hardware, though.
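
If you do have two cards, a minimal sketch of splitting a quantized model across them, assuming the llama-cpp-python bindings; the model path is a placeholder:

```python
# Minimal sketch of splitting a quantized model across two GPUs,
# assuming the llama-cpp-python bindings; the model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-70b-chat.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,          # offload all layers to the GPUs
    tensor_split=[0.5, 0.5],  # spread the weights evenly across both cards
    n_ctx=4096,
)

out = llm("Q: What limits local LLM speed? A:", max_tokens=64)
print(out["choices"][0]["text"])
```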

Google uses their own chips for AI.

llama.cpp works on Windows too (or any OS for that matter), though Linux will give you better performance.

Even dumber than that: when their activation method fails, support uses Massgrave to install Windows on customer PCs.

2/3 of the people living in Saudi Arabia and the UAE are immigrants whose passports have been confiscated. They work in factories, on construction sites, in oil fields, and all other kinds of manual jobs. Meanwhile the citizens occupy all the well-paid jobs that require education; immigrants can't apply to those. If they didn't use forced labor, there simply wouldn't be enough people in the country to fill all the jobs, and their economy could not be as good as it is right now.