They are doing fucking what?!
What if you modified the tracker, like changed some letters? Could that mess up their system if many people did it?
That would be the sensible approach, but some executive is probably throwing a tantrum because of their injured pride. I will be surprised if they just comply.
Why would anyone want millions of users to come here? Everything that becomes popular or has potential to make a lot of money is always ruined eventually.
Police are already on their way to your location for speech containing anti-corporate sentiment, flagged by ai
This has me thinking, I might actually be interested in looking at ads if they showed only completely random things, like literally anything that exists. At least I wouldn't be annoyed with them so much.
Once Windows 10 becomes unusable, I will switch to Linux no matter what. Considering they are already testing the waters with a subscription for the OS, there is nothing they will stop at.
I wouldn't be surprised if they started completely preventing regular users from having administrator rights on their own computers, and you would either have to buy a more expensive licence or contact some MS support ai and beg it to do whatever you need admin rights for. Most users most likely wouldn't even notice or care, since you don't need an administrator account for the things the majority of people use computers for.
And what do you do when someone is actually doing something malicious?
clarification edit: malicious people can easily pretend to be stupid and claim they made a mistake when they do bad shit.
The one who deployed the ai to be there to decide whether to kill or not
Unless it's actually sentient, being able to decide whether to kill or not is just a more advanced targeting system. Not saying it's a good thing they are doing this at all, this is almost as bad as using tactical nukes.
Letting it learn is just new technology that has become possible. Not bad on its own, but it has so much potential to be used for both good and evil.
But yes, it's pretty bad if they are creating machines that learn how to kill people by themselves. Create enough of them and it's only an unknown amount of mistakes and negligence away from becoming a localized "ai uprising". And if in the future they create some bigger ai to manage a bunch of them conveniently, and possibly delegate production to it too because it's more efficient and cheaper that way, then it's an even bigger danger.
Ai doesn't even need sentience to do unintended stuff. When I have used chatgpt to help me create scripts, it sometimes seems to decide on its own to do something in a certain way that I didn't request, or to add something stupid. Though it's usually also kind of my own fault for not defining what I want properly, a mistake like that is really easy to make, and if we are talking about defining who we want the ai to kill, it becomes really awful to even think about.
And if nothing goes wrong and it all works exactly as planned, it's kind of an even bigger problem, because then we have countries with really efficient, unfeeling and mass-producible soldiers that do 100% as ordered, will not retreat on their own and will not stop until told to do so. With the current political rise of certain types of people all around the world, this is even more distressing.
That is like saying you cant punish gun for killing people
edit: meaning that it's redundant to talk about not being able to punish ai, since it can't feel or care anyway. No matter how long a pole you use to hit people with, the responsibility for your actions will still reach you.
so it's technically possible to run excel with excel