Rabbit R1 AI box revealed to just be an Android app

AnActOfCreation@programming.dev to Technology@lemmy.world – 854 points –
arstechnica.com
  • Rabbit R1 AI box is actually an Android app in a limited $200 box, running on AOSP without Google Play.
  • Rabbit Inc. is unhappy about details of its tech stack being public, threatening action against unauthorized emulators.
  • AOSP is a logical choice for mobile hardware as it provides essential functionalities without the need for Google Play.


I just used ChatGPT to write a 500-line Python application that syncs IP addresses from asset management tools to our vulnerability management stack. This took about 4 hours using AutoGen Studio. The code just passed QA and is moving into production next week.
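The core of that kind of sync is just a set diff between two inventories. Here's a minimal, hypothetical sketch of the reconciliation step (the function name and inputs are illustrative, not taken from the linked repo):

```python
import ipaddress

def plan_ip_sync(asset_ips, scanner_ips):
    """Return (to_add, to_remove) so the scanner's target list
    matches the asset inventory. Malformed entries are skipped."""
    def clean(ips):
        valid = set()
        for ip in ips:
            try:
                # normalize and validate each address
                valid.add(str(ipaddress.ip_address(ip)))
            except ValueError:
                pass  # skip garbage from either source
        return valid

    assets, scanner = clean(asset_ips), clean(scanner_ips)
    return sorted(assets - scanner), sorted(scanner - assets)
```

The actual tool would wrap this with API calls to the asset and vulnerability management products; the diff logic itself is the part worth reviewing.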

https://github.com/blainemartin/R7_Shodan_Cloudflare_IP_Sync_Tool

Tell me again how LLMs are useless?

To be honest… that doesn’t sound like a heavy lift at all.

Dream of tech bosses everywhere. Pay an intermediate dev for average level senior output.

Intermediate? Nah, junior. They're cheaper after all.

But senior devs do a lot more than output code. Sometimes - like Bill Atkinson's famous -2000-line change to QuickDraw - their jobs involve a lot of complex logic and very little actual code output.

It's a shortcut for experience, but you lose a lot of the tools you get with experience. If I were early in my career I'd be very hesitant to rely on it, as it's a fragile ecosystem right now that might disappear, in the same way that you want to avoid tying your skills to a single company's product. In my workflow it slows me down, because the answers I get are often average or wrong; it's never "I'd never have thought of doing it that way!" levels of amazing.

You used the right tool for the job, and it saved you hours of work. General AI is still a long way off, and people expecting the current models to behave like one are foolish.

Are they useless? For writing code, no. For most other tasks, yes - or worse, as they will be confidently wrong about what you ask them.

I think the reason they're useful for writing code is that there's a third party - the parser or compiler - that checks their work. I've used LLMs to write code as well, and they didn't always give me something that worked, but I was easily able to catch the errors.

Are they useless?

Only if you believe most Lemmy commenters. They are convinced you can only use them to write highly shitty and broken code and nothing else.

This is my experience with LLMs. I've gotten them to write code that can at best be used as a scaffold. I personally don't find much use for them, as you functionally have to proofread everything they do. All it does is change the work from a creative process to a review process.

I don't agree. Just a couple of days ago I went to write a function to do something sort of confusing to think about. By the name of the function, copilot suggested the entire contents of the function and it worked fine. I consider this removing a bit of drudgery from my day, as this function was a small part of the problem I needed to solve. It actually allowed me to stay more focused on the bigger picture, which I consider the creative part. If I were a painter and my brush suddenly did certain techniques better, I'd feel more able to be creative, not less.


But we never have proofs that it gives good code, that's convenient...

So you want me to go into one of my codebases, remember what came from copilot and then paste it here? Lol no

Of course you can't.

Just a couple of days ago

You already forgot, that's convenient, again.

Yeah you post your employer first, dumbass

All you want is something to belittle

You say it's magical but never post proof. That's all I need to think it's shit. No need to debate about it for hours. Come back when you entice us with something instead of the billion REST APIs that are useless but seem to give a hard on to all the AI bros out there.


Who's going to tell them that "QA" just ran the code through the same AI model and it came back "Looks Good".

:-)

I don't think LLMs are useless, but I do think little SoC boxes running a single application that will vaguely improve your life with loosely defined AI features are useless.

There's no sense trying to explain to people like this. Their eyes glaze over when they hear AutoGen, agents, Crew AI, RAG, Opus... To them, generative AI is nothing more than the free version of ChatGPT from a year ago. They've not kept up with the advancements, so they argue from a point in the distant past. The future will be hitting them upside the head soon enough, and they will be the ones complaining that nobody told them what was coming.

They aren't trying to have a conversation; they're trying to convince themselves that the things they don't understand are bad and that they're making the right choice by not using it. They'll be the boomers that needed millennials to send emails for them. Been through that, so I just pretend I don't understand AI. I feel bad for the zoomers and gen alphas that will be running AI and futilely trying to explain how easy it is. It's been a solid 150 years of extremely rapid invention and innovation of disruptive technology. But THIS is the one that actually won't be disruptive.
