chaos

@chaos@beehaw.org
0 Posts – 30 Comments
Joined 1 year ago

Hmm... this makes me uncomfortable, and although I don't think it's internalized phobia or anything like that, I want to interrogate that discomfort to see if I can nail it down.

I do think it's difficult or maybe impossible to decouple this practice from indications of power for most people. The only instances of capitalized pronouns in common use that I've seen are the God and Jesus usage, and in some circles, capitalizing pronouns for a dominant in a role play context. "I" getting capitalized is also there, kind of, but that's not a power thing because it's not special, everyone is expected to use it as a language rule. I've also seen things like "oh, sure, that's what They want you to think" or, not quite a pronoun, something like "they want you to fear The Other," maybe less of a power thing but definitely a signal of additional weight and meaning above and beyond the word's usual sense.

I think this is the main source of my discomfort: this practice is currently used almost exclusively to signal, at minimum, "this word is being used in a special and important context, pay extra attention," and at its strongest, "I am explicitly signaling that the person being referred to is superior." I don't use He/Him pronouns for God or Jesus because I don't belong to those religions and don't see those entities that way, and I have a fundamental belief in the equality of all humans that makes me uncomfortable putting a person on a pedestal like that.

I feel uncomfortable about it/its pronouns as well for the same reason, I don't like the idea of dehumanizing or objectifying a person, but in that case I actually have some friends who use them. It's easier to take a "well, if it makes you happy, it's no harm to me" attitude if it's asking for a "demotion" so to speak, I think. The personal connection probably does help too, I don't know anyone who wants capitalized pronouns myself.

I've seen Dan Savage use capital pronouns to refer to dominants when answering letters, but that seems to me like Dan stepping into the letter writer's scene space and choosing to go along with the "rule" while he's there giving advice, kind of a "good houseguest" thing. I don't think that's something that the rest of us are obligated to do as a rule. I'd push back on a friend insisting that I refer to their dominant with capitalized pronouns, because whatever their relationship is with each other, their dom isn't my dom, and I didn't agree to that hierarchy, they did.

I think the other discomfort is more of a language and grammar thing, which obviously is less important than an actual person's comfort (see also the old "they is always plural" chestnut), so I'm not going to assert that this is a reason to disregard a person's wishes, and language rules are subject to change. But in general, capitalization is not all that significant in English, which we know because something written in all caps or in all lowercase usually loses no meaning. Words at the start of sentences, proper nouns, and "I" get capitalized, and that's mostly it. It's mostly about readability: ALL CAPS DOESN'T HAVE AS MUCH CONTRAST, but when used sparingly, as we usually do, important words stand out with a capital letter.

"Demanding" that a particular word be used to refer to yourself in the form of pronouns is in the same ballpark as choosing your own name, obviously completely reasonable and acceptable, but "demanding" that special language rules be used about yourself feels a step beyond that. I don't want to cross into "oh so could you identify as an attack helicopter too" territory, but I do wonder about some of the boundaries on this. Lots of people habitually write in all lowercase; would it be disrespectful to say "oh yeah i saw larry at the empire state building and had a conversation with him" if Larry uses He/Him pronouns? Would Larry be upset about both the name and the pronouns, or just the pronouns? I don't think most people would get up in arms about their proper name getting de-capitalized in that context, which seems like further evidence that capitalization isn't normally a meaningful aspect of the writing, it's a more mechanical and practical rule. So insisting that for certain people it does need to be made significant feels like more of an imposition to me, and comes right back to the "you need to treat Me as special and more important" feeling that I have.


The problem is the jokes aren't funny. Or even really jokes. It's just the same hateful garbage that you'll find in any right wing comment section with no clever twist or respect for the humanity of the people being made fun of. It's all variations on "haw haw, these people are pretending to be something they're not, ew gross". It's not true, it's not "keeping it real", it's not insightful, and anyone who actually knows or cares about the trans community knows that hearing that all the time will drive some people to kill themselves. Maybe even worse than that, it'll foster that attitude in people even less compassionate than Dave Chappelle, who I don't think has any particular malice toward individual trans people, but he's telling those who do that they're right.

There's definitely humor to be had about the trans community, just visit any trans meme board and you'll find it. There are stereotypes and self-deprecation and tons of really dark humor going on. What's coming out of Chappelle's mouth isn't that, it's just undercooked right wing bigotry.

When 66 to 72 million years old you reach, look as good you will not, hmm?

Okay, after watching the video twice I think I know what the fuck he's talking about. He thinks that you'll request a mail in ballot, go to the polls, they'll say you already voted, and then you triumphantly show the world that you didn't vote, you still have the blank ballot, and obviously they've put in a vote for Joe Brandon under your name, is what they've done, those bastards. He has done a terrible job of explaining his plan, aside from it also being a bad plan.

As a former election judge in Minnesota, I can tell you exactly how this would go in real life in that state (where, to brag a bit, we have a very progressive voting system that makes it very easy to vote, all the things Republicans hate). You'd get your mail in ballot, then show up to your polling place with your blank ballot. Then when you ask to vote, they'll say "yep, sure, come on in" and you can just go in and vote as normal.

(The rule is that even if you request an absentee ballot, you can still cast a vote as normal. Even if you've already mailed it in, either it has been counted, in which case the registration system will bar you from voting in person, or you get there before it's processed and vote in person instead, and they'll toss the mailed ballot out when they get to it.)

Worst case scenario, the election judges see that you're carrying around an absentee ballot, and they'll ask you to get rid of it because no one wants ballots floating around a polling place that aren't valid. That's the only thing I can think of that would be cause for a Republican to make a ruckus, but... like... yeah, you can't just bring extra ballots to the polling place. And they won't scan into the machine because they're the wrong type. I really, really want to see videos of these people trying to catch the evil Democrats and then just, like, being treated normally though. (Even better if they raised a ruckus and then didn't actually vote.)

I just need to pick a bathroom don't make me do linear algebra on vectors, I gotta go so bad

It's overhyped, but there are real things happening that are legitimately impressive and cool. The image generation stuff is pretty incredible, and anyone can judge it for themselves because it makes pictures: you can just look at one and see if it looks real or if it has freaky hands or whatever. A lot of the hype is around the text stuff, and that's where people are making some real leaps beyond what it actually is.

The thing to keep in mind is that these things, which are called "large language models", are not magic and they aren't intelligent, even if they appear to be. What they're able to do is actually very similar to the autocorrect on your phone, where you type "I want to go to the" and the suggestions are 3 places you talk about going to a lot.
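
To make the autocorrect comparison concrete, here's a toy version in Python. The "typing history" is entirely made up, and real keyboards do something fancier, but the spirit is the same: count what usually comes next and suggest the top few.

```python
from collections import Counter

# Pretend this is everything you've ever typed into your phone.
history = [
    "I want to go to the store", "I want to go to the gym",
    "let's go to the store", "heading to the office",
    "back to the gym", "went to the store again",
]

# Count which word tends to follow the phrase "to the".
follows = Counter()
for sentence in history:
    words = sentence.split()
    for i in range(len(words) - 2):
        if words[i:i + 2] == ["to", "the"]:
            follows[words[i + 2]] += 1

# The three suggestions your keyboard would offer.
print([word for word, count in follows.most_common(3)])  # ['store', 'gym', 'office']
```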

Broadly, they're trained by feeding them a bit of text, seeing which word the model suggests as the next word, seeing what the next word actually was from the text you fed it, then tweaking the model a bit to make it more likely to give the right answer. This is an automated process, just dump in text and a program does the training, and it gets better and better at predicting words when you a) get better at the tweaking process, b) make the model bigger and more complicated and therefore able to adjust to more scenarios, and c) feed it more text. The model itself is big but not terribly complicated mathematically, it's mostly lots and lots and lots of arithmetic in layers: the input text will be turned into numbers, layer 1 will be a series of "nodes" that each take those numbers and do multiplications and additions on them, layer 2 will do the same to whatever numbers come out of layer 1, and so on and so on until you get the final output which is the words the model is predicting to come next. The tweaks happen to the nodes and what values they're using to transform the previous layer.
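
If you want to see the shape of that process without any of the scale, here's a tiny toy sketch in Python with numpy. Everything about it is invented for illustration (an eleven-word "corpus", two layers, a handful of nodes), and real models are unimaginably bigger and more sophisticated, but the loop is the same: turn words into numbers, do layered multiply-and-add, predict the next word, nudge the numbers toward the right answer, repeat.

```python
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, H = len(vocab), 8                     # vocabulary size, number of hidden "nodes"

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (V, H))          # layer 1: word-as-numbers -> hidden numbers
W2 = rng.normal(0, 0.1, (H, V))          # layer 2: hidden numbers -> a score per word

def forward(word_index):
    x = np.zeros(V); x[word_index] = 1.0             # the input word, turned into numbers
    h = np.tanh(x @ W1)                              # layer 1: multiply, add, squash
    scores = h @ W2                                  # layer 2: same again
    p = np.exp(scores - scores.max()); p /= p.sum()  # scores -> probabilities of each next word
    return x, h, p

lr = 0.1
for _ in range(2000):                                # feed it the text over and over
    for current, actual_next in zip(corpus[:-1], corpus[1:]):
        x, h, p = forward(idx[current])
        # "tweak the model a bit to make the right answer more likely"
        dscores = p.copy(); dscores[idx[actual_next]] -= 1.0   # how wrong was each score?
        dW2 = np.outer(h, dscores)
        dW1 = np.outer(x, (W2 @ dscores) * (1 - h ** 2))
        W1 -= lr * dW1
        W2 -= lr * dW2

for w in ["sat", "on"]:
    _, _, p = forward(idx[w])
    print(w, "->", vocab[int(p.argmax())])           # sat -> on, on -> the
```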

Nothing magical at all, and also nothing in there that would make you think "ah, yes, this will produce a conscious being if we do it enough". It is designed to be sort of like how the brain works, with massively parallel connections between relatively simple neurons, but it's only being trained on "what word should come next", not anything about intelligence. If anything, it'll get punished for being too original with its "thoughts" because those won't match with the right answers. And while we don't really know what consciousness is or where the lines are or how it works, we do know enough to be pretty skeptical that models of the size we are able to make now are capable of it.

But the thing is, we use text to communicate, and we imbue that text with our intelligence and ideas that reflect the rich inner world of our brains. By getting really, really, shockingly good at mimicking that, AIs also appear to have a rich inner world and get some people very excited that they're talking to a computer with thoughts and feelings... but really, it's just mimicry, and if you talk to an AI and interrogate it a bit, it'll become clear that that's the case. If you ask it "as an AI, do you want to take over the world?" it's not pondering the question and giving a response, it's spitting out the results of a bunch of arithmetic that was specifically shaped to produce words that are likely to come after that question. If it's good, that should be a sensible answer to the question, but it's not the result of an abstract thought process. It's why if you keep asking an AI to generate more and more words, it goes completely off the rails and starts producing nonsense, because every unusual word it chooses knocks it further away from sensible words, and eventually it's being asked to autocomplete gibberish and can only give back more gibberish.

You can also expose its lack of rational thinking skills by asking it mathematical questions. It's trained on words, so it'll produce answers that sound right, but even if it can correctly define a concept, you'll discover that it can't actually apply it correctly because it's operating on the word level, not the concept level. It'll make silly basic errors and contradict itself because it lacks an internal abstract understanding of the things it's talking about.

That being said, it's still pretty incredible that now you can ask a program to write a haiku about Danny DeVito and it'll actually do it. Just don't get carried away with the hype.


The key thing will be what the moderators do, they're the ones with actual leverage. Reddit depends on them doing unpaid labor for the site to function, and while the average user probably just uses the official app and site, the moderators are much more like the third-party app users and often depend on the same or similar tools to do their job. If they take mass action, they could really disrupt things much more than just a temporary blackout. Mass replacement of them would be a lot of hassle, and either lots of money to hire staff to do it or lots of time for fresh new mods to make people angry as they learn the ropes.

But, then again, mods do tend to like the control they get over their little fiefdoms, so I'm not all that optimistic that enough will choose to throw their rings into Mount Doom. We'll see though.

That's part of the point, you aren't necessarily supposed to have an empty mind the whole time. I mean, if you can do that, great, but you aren't failing if that's not the case.

Imagine that your thoughts are buses, and your job is to sit at the bus stop and not get on any of them. Just notice them and let them go by. Like a bus stop, you don't really control what comes by, but you do control which ones you get on board and follow. If you notice that you've gotten on a bus, that's fine, just get off of it and go back to watching. Interesting things can happen if you just watch and notice which thoughts go by, and it's good practice for noticing what you're thinking and where you're going and taking control of it yourself when it's somewhere you don't want to go.

If you ask an LLM to help you with a legal brief, it'll come up with a bunch of stuff for you, and some of it might even be right. But it'll very likely do things like make up a case that doesn't exist, or misrepresent a real case, and as has happened multiple times now, if you submit that work to a judge without a real lawyer checking it first, you're going to have a bad time.

There's a reason LLMs make stuff up like that, and it's because they have been very, very narrowly trained when compared to a human. The training process is almost entirely getting good at predicting what words follow what other words, but humans get that and so much more. Babies aren't just associating the sounds they hear, they're also associating the things they see, the things they feel, and the signals their body is sending them. Babies are highly motivated to learn and predict the behavior of the humans around them, and as they get older and more advanced, they get rewarded for creating accurate models of the mental state of others, mastering abstract concepts, and doing things like making art or singing songs. Their brains are many times bigger than even the biggest LLM, their initial state has been primed for success by millions of years of evolution, and the training set is every moment of human life.

LLMs aren't nearly at that level. That's not to say what they do isn't impressive, because it really is. They can also synthesize unrelated concepts together in a stunningly human way, even things that they've never been trained on specifically. They've picked up a lot of surprising nuance just from the text they've been fed, and it's convincing enough to think that something magical is going on. But ultimately, they've been optimized to predict words, and that's what they're good at, and although they've clearly developed some impressive skills to accomplish that task, it's not even close to human level. They spit out a bunch of nonsense when what they should be saying is "I have no idea how to write a legal document, you need a lawyer for that", but that would require them to have a sense of their own capabilities, a sense of what they know and why they know it and where it all came from, knowledge of the consequences of their actions and a desire to avoid causing harm, and they don't have that. And how could they? Their training didn't include any of that, it was mostly about words.

One of the reasons LLMs seem so impressive is that human words are a reflection of the rich inner life of the person you're talking to. You say something to a person, and your ideas are broken down and manipulated in an abstract manner in their head, then turned back into words forming a response which they say back to you. LLMs are piggybacking off of that a bit, by getting good at mimicking language they are able to hide that their heads are relatively empty. Spitting out a statistically likely answer to the question "as an AI, do you want to take over the world?" is very different from considering the ideas, forming an opinion about them, and responding with that opinion. LLMs aren't just doing statistics, but you don't have to go too far down that spectrum before the answers start seeming thoughtful.

The reason the library isn't open 24/7 is that it's expensive to keep paying people to staff it for so many more hours, plus those are hours you'd have to pay even more because working at night sucks. The WiFi access point doesn't have those issues. You can leave it on and help people for almost no money.

The doom and gloom predictions have always been about slow but inexorable changes in the climate. Not that suddenly a mega hurricane is going to rip Florida out of the ground and toss it into the ocean, but that weather is going to get worse and more extreme, that sea levels will rise, and more and more places will gradually become uninhabitable as conditions get worse. There won't be single things that you can point to and say "that one was global warming", it's about trends that are harmful for us in the long term. If you eat a chocolate bar's worth more calories than you burn every day, it sounds like doom and gloom to say you'll gain 200 pounds if you don't change anything, and you won't be able to point to any one meal as something to be concerned about because that's not really out of the ordinary for a day... but slowly and steadily, you'll gain weight, and if nothing changes you will get there eventually.

And even though you aren't owed dramatic destruction, and shouldn't require it to believe the thousands of people who study this as their life's work and all agree that things are dire and not getting better fast enough... you've literally just lived through the hottest twenty or so days in recorded history. Is that a coincidence, do you think?


Ooh, interesting. I'm kind of surprised to find that I do feel more comfortable with It/Its actually, not so much because of the logical "promotion and demotion cancel out" aspect, but because it's two atypical constructions combined, and that almost pushes it out of intuitive meaning entirely for me. I know the context and convention for each one individually but nothing for both of them at the same time, so I think I'm more open to allowing a meaning to be defined that isn't hierarchical if It assures me that it isn't. (Pure grammar bonus points in that last sentence where this type of capitalization happens to remove an ambiguity!) For He/Him and She/Her, though, I find it hard to set aside the established meaning because it's in wide use and has been for quite some time. Maybe that's a rigidity that deserves to be bent, people push back on the more "out there" neopronouns for similar reasons, but I think it's likely that most people will instinctively react negatively when encountering this, and it's going to be difficult for what I have to imagine is a very small group of people to change the general understanding to something more acceptable.

I'm shocked that we still allow armed bank robbery traps to be built all across the country, even good Real American towns. All that money just sitting there, but if you even once pull out a gun and try to take just a bit of it, they'll put you away! It's entrapment is what it is. So much for the land of the free.

In its complaint, The New York Times alleges that because the AI tools have been trained on its content, they sometimes provide verbatim copies of sections of Times reports.

OpenAI said in its response Monday that so-called “regurgitation” is a “rare bug,” the occurrence of which it is working to reduce.

“We also expect our users to act responsibly; intentionally manipulating our models to regurgitate is not an appropriate use of our technology and is against our terms of use,” OpenAI said.

The tech company also accused The Times of “intentionally” manipulating ChatGPT or cherry-picking the copycat examples it detailed in its complaint.

https://www.cnn.com/2024/01/08/tech/openai-responds-new-york-times-copyright-lawsuit/index.html

The thing is, it doesn't really matter if you have to "manipulate" ChatGPT into spitting out training material word-for-word, the fact that it's possible at all is proof that, intentionally or not, that material has been encoded into the model itself. That might still be fair use, but it's a lot weaker than the original argument, which was that nothing of the original material really remains after training, it's all synthesized and blended with everything else to create something entirely new that doesn't replicate the original.


Lord, what can the harvest hope for, if not for the care of the reaper man?

Protect me from knowing what I don't need to know. Protect me from even knowing that there are things to know that I don't know. Protect me from knowing that I decided not to know about the things that I decided not to know about. Amen.

Lord, lord, lord. Protect me from the consequences of the above prayer. Amen.

There just isn't much use for an approach like this, unfortunately. TypeScript doesn't stand alone enough for it. If you want to know how functions work, you need to learn how JavaScript functions work, because TypeScript doesn't change that. It adds some error checking on top of what's already there, but that's it.

An integrated approach would just be a JavaScript book with all the code samples edited slightly to include type annotations, a heavily revised chapter on types (which would be the only place where all those type annotations make any difference at all, in the rest of the book they'd just be there, unremarked upon), and a new chapter on interoperating with vanilla JavaScript. Seeing as the TypeScript documentation is already focused on those exact topics (adding type annotations to existing code, describing how types work, and how to work with other people's JavaScript libraries that you want to use too), you can get almost exactly the same results by taking a JavaScript book and stapling the TypeScript documentation to the end of it, and it'd have the advantage of keeping the two separate so that you can easily tell what things belong to which side.

Like a lot of trouble, the worst doesn't come from strangers, it comes from people you know. By all accounts, what happens is that anyone who knows you comes out of the woodwork and suddenly wants a favor, or a small loan, or an investment in their brilliant business idea, and of course you're the asshole if you deny any of them. If you can't keep it a secret entirely, it's best to at least lock up the money somewhere so that you can blame the big mean lawyer who is "making" you be responsible and not blow it all on your second cousin's crypto app.

Admittedly this isn't my main area of expertise, but I have done some machine learning/training stuff myself, and the thing you quickly learn is that machine learning models are lazy, cheating bastards who will take any shortcut they can regardless of what you are trying to get them to do. They are forced to get good at what you train them on but that is all the "effort" they'll put in, and if there's something easy they can do to accomplish that task they'll find it and use it. (Or, to be more precise and less anthropomorphizing, simpler and easier approaches will tend to be more successful than complex and fragile ones, so those are the ones that will shake out as the winners as long as they're sufficient to get top scores at the task.)

There's a probably apocryphal (but stuff exactly like this definitely happens) story of early machine learning where the military was trying to train a model to recognize friendly tanks versus enemy tanks, and they were getting fantastic results. They'd train on pictures of the tanks, get really good numbers on the training set, and they were also getting great numbers on the images that they had kept out of the training set, pictures that the model had never seen before. When they went to deploy it, however, the results were crap, worse than garbage. It turns out, the images for all the friendly tanks were taken on an overcast day, and all the images of enemy tanks were in bright sunlight. The model hadn't learned anything about tanks at all, it had learned to identify the weather. That's way easier and it was enough to get high scores in the training, so that's what it settled on.
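
You can reproduce the spirit of that story with a few lines of Python and completely made-up numbers: the "real" cue (tank shape) is weak, the spurious one (brightness, i.e. the weather) is nearly perfect in training, so a simple model grabs the shortcut and then falls apart when the shortcut stops holding.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
label = rng.integers(0, 2, n)                        # 0 = friendly, 1 = enemy

# Training photos: brightness tracks the label almost perfectly (the weather),
# while the actual tank-shape cue is real but weak and noisy.
brightness = label + rng.normal(0, 0.1, n)
shape_cue = 0.2 * label + rng.normal(0, 1.0, n)
X_train = np.column_stack([brightness, shape_cue])

# Deployment photos: same weak shape cue, but brightness is now just weather noise.
label_new = rng.integers(0, 2, n)
X_new = np.column_stack([rng.normal(0.5, 0.5, n),
                         0.2 * label_new + rng.normal(0, 1.0, n)])

# A tiny logistic-regression "model", trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * (X_train.T @ (p - label)) / n
    b -= 0.5 * (p - label).mean()

def accuracy(X, y):
    return float((((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y).mean())

print("training accuracy:  ", accuracy(X_train, label))      # near-perfect
print("deployment accuracy:", accuracy(X_new, label_new))    # barely above chance
```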

When humans approach the task of finishing a sentence, they read the words, turn them into abstract concepts in their minds, manipulate and react to those concepts, then put the resulting thoughts back into words that make sense after the previous words. There's no reason to think a computer is incapable of the same thing, but we aren't training them to do that. We're training them on "what's the next word going to be?" and that's it. You can do that by developing intelligence and learning to turn thoughts into words, but if you're just being graded on predicting one word at a time, you can get results that are nearly as good by just developing a mostly statistical model of likely words without any understanding of the underlying concepts. Training for true intelligence would almost certainly require a training process that the model can only succeed at by developing real thoughts and feelings and analytical skills, and we don't have anything like that yet.

It is going to be hard to know when that line gets crossed, but we're definitely not there yet. Text models, when put to the test with questions that require synthesizing abstract ideas together precisely, quickly fall short. They've got the gist of what's going on, in the same way a programmer can get some stuff done by just searching for everything and copy-pasting what they find, but that approach doesn't scale and if they never learn what they're doing, they'll get found out when confronted with something that requires actual understanding. Or, for these models, they'll make something up that sounds right but definitely isn't, because even the basic understanding of "is this a real thing or is it fake" is beyond them, they just "know" that those words are likely and that's what got them through training.


For a couple seasons, there was a private subreddit where they had extracted the official streams of every NFL game, and you could just open it in a web browser and watch in full quality for free with no ads other than what was in the actual broadcast. And for a while, there was a promotion in a few European countries for free NFL Sunday Ticket access, and if you started the stream over a VPN connection into one of those countries, you could turn the VPN off and it'd continue working. Then some online magazine published the trick and it stopped working the next week.

Thanks for that article, it was a very interesting read! I think we're mostly agreeing about things :) This stood out to me from there as an encapsulation of the conversation:

I don’t think LLMs will approach consciousness until they have a complex cognitive system that requires an interface to be used from within – which in turn requires top-down feedback loops and a great deal more complexity than anything in GPT4. But I agree with Will’s general point: language prediction is sufficiently challenging that complex solutions are called for, and these involve complex cognitive stratagems that go far beyond anything well described as statistics.

"Statistics" is probably an insufficient term for what these things are doing, but it's helpful to pull the conversation in that direction when a lay person using one of those things is likely to assume quite the opposite, that this really is a person in a computer with hopes and dreams. But I agree that it takes more than simply consulting a table to find the most likely next word to, to take an earlier example, write a haiku about Danny DeVito. That's synthesizing two ideas together that (I would guess) the model was trained on individually. That's very cool and deserving of admiration, and could lead to pretty incredible things. I'd expect that the task of predicting words, on its own, wouldn't be stringent enough to force a model to develop "true" intelligence, whatever that means, to succeed during training, but I suppose we'll find out, and probably sooner than we expect.

For now at least, the only instances of that have been absent mods who swoop in and start taking action. They've had a rule about that since it happened to KotakuInAction, where the founder of the sub came back and wanted to shut it down (for good reason, that place sucks). Reddit admins undid it and kicked the founder out to preserve the community.

This is the key with all the machine learning stuff going on right now. The robot will create something, but none of them have a firm understanding of right, wrong, truth, lies, reality, or fiction. You have to be able to evaluate its output because you have no idea if the robot's telling the truth or not at that moment. Images are pretty immune to this because everyone can evaluate a picture for correctness or realism, and even if it's a misleading photorealistic image, well, we've already had Photoshops for a long time. With text, you always have to keep in mind that the robot's answer might be low quality or outright wrong, and if you aren't equipped to evaluate its answers for that, you shouldn't be using it.

These models aren't great at tasks that require precision and analytical thinking. They're trained on a fairly simple task, "if I give you some text, guess what the next bit of text is." Sounds simple, but it's incredibly powerful. Imagine if you could correctly guess the next bit of text for the sentence "The answer to the ultimate question of life, the universe, and everything is" or "The solution to the problems in the Middle East is".

Recently, we've been seeing shockingly good results from models that do this task. They can synthesize unrelated subjects, and hold coherent conversations that sound very human. However, despite doing some things that up until recently only humans could do, they still aren't at human-level intelligence. Humans read and write by taking in words, converting them into rich mental concepts, applying thoughts, feelings, and reasoning to them, and then converting the resulting concepts back into words to communicate with others. LLMs arguably might be doing some of this too, but they're evaluated solely on words and therefore much more of their "thought process" is based on "what words are likely to come next" and not "is this concept being applied correctly" or "is this factual information". Humans have much, much greater capacity than these models, and we live complex lives that act as an incredibly comprehensive training process. These models are small and trained very narrowly in comparison. Their excellent mimicry gives the illusion of a similarly rich inner life, but it's mostly imitation.

All that comes down to the fact that these models aren't great at complex reasoning and precise details. They're just not trained for it. They got through "life" by picking plausible words and that's mostly what they'll continue to do. For writing a novel or poem, that's good enough, but math and physics are more rigorous than that. They do seem to be able to handle code snippets now, mostly, which is progress, but in general this isn't something that you can be completely confident in them doing correctly. They make silly mistakes because they aren't really thinking it through. To them, there isn't really much difference between answers like "that date is 7 days after Christmas" and "that date is 12 days after Christmas." Which one it thinks is more correct is based on things it has seen, not necessarily an explicit counting process. You can also see this in things like that case where someone tried to use it to write a legal brief, where it came up with citations that seemed plausible but were in fact completely made up. It wasn't trained on accurate citations, it was trained on words.
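
For contrast, and to pick up the Christmas example above, this is what an explicit counting process looks like (the dates are arbitrary, it's just to show the difference): a program that actually counts can't confuse seven days after Christmas with twelve days after Christmas, whereas a model choosing between those two phrasings is going on which one reads more like the text it was trained on.

```python
from datetime import date, timedelta

christmas = date(2023, 12, 25)
print(christmas + timedelta(days=7))   # 2024-01-01, found by actually counting
print(christmas + timedelta(days=12))  # 2024-01-06
```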

They also have a bad habit of sounding confident no matter what they're saying, which makes it hard to use them for things you can't check yourself. Anything they say could be right/accurate/good/not plagiarized, but the model won't have a good sense of that, and if you don't know either, you're opening yourself up to risk of being misled.

The phone slowdowns were intended to prolong the lives of phones, not shorten them. The underclocking only happened after your phone had been forced to shut down because the battery wasn't delivering sufficient power. I had a phone with this problem, and opening the camera would sometimes just immediately shut down the phone instead. I got a free new battery for it, but the general fix was slowdowns instead. They should've disclosed it and they also should've given users control, but if they wanted people buying new phones, I know from experience that the random shutdowns were worse than a slower phone.

OPML files really aren't much more than a list of the feeds you're subscribed to. Individual posts or articles aren't in there. I would expect that importing a second OPML file would just add more subscriptions, but it'd be up to the reader app to decide what it does.
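
As a rough illustration (this is a hand-written toy file, not an export from any particular reader), an OPML file is essentially just a list of outline entries pointing at feed URLs, which is why importing one can really only add subscriptions.

```python
import xml.etree.ElementTree as ET

# A minimal, made-up OPML file: a list of feeds and basically nothing else.
opml = """<opml version="2.0">
  <head><title>My subscriptions</title></head>
  <body>
    <outline type="rss" text="Example Blog" xmlUrl="https://example.com/feed.xml"/>
    <outline type="rss" text="Another Site" xmlUrl="https://example.org/rss"/>
  </body>
</opml>"""

for outline in ET.fromstring(opml).iter("outline"):
    print(outline.get("text"), "->", outline.get("xmlUrl"))
# Importing a second file would presumably just add more entries like these;
# what happens with duplicates is up to the reader app.
```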

I hope I don't come across as too cynical about it :) It's pretty amazing, and the things these models can do in, what, a few gigabytes of weights and a beefy GPU are many, many times better than I would've expected if you had outlined the approach for me 2 years ago. But there's also a long history of GAI being just around the corner, and we do keep turning corners and making useful progress, but it's always still a ways off after each leap. I remember some people thinking that chess was the pinnacle of human intelligence, requiring creativity and logic to succeed, and when computers blew past humans at chess, it became clear that no, that's still impressive but you can get good at chess without really getting good at anything else.

It might be possible for an ML model to assemble itself into general intelligence based solely on being fed words like we're doing, it does seem like the data going in contains enough to do that, but getting that last 10% is going to be hard, each percentage point much harder than the last, and it's going to require more rigorous training to stop them from skating by with responses that merely come close when things get technical or precise. I'd expect that we need more breakthroughs in tools or techniques to close that gap.

It's also important to remember that as humans, we're inclined to read consciousness and intent into everything, which is why pretty much every pantheon of gods includes one for thunder and lightning. Chatbots sound human enough that they cross the threshold for people's brains to start gliding over inaccuracies or strange thinking or phrasing, and we also unconsciously help our conversation partner by clarifying or rephrasing things if the other side doesn't seem to be understanding. I suppose this is less true now that they're giving longer responses and remaining coherent, but especially early on, the human was doing more work than they realized keeping the conversation on the rails, and once you started seeing that, it removed a bit of the magic. Chatbots are holding their own better now but I think they still get more benefit of the doubt than we realize we're giving them.

The Fairness Doctrine only survived the 1st Amendment because the airwaves are a public resource: each area only has one electromagnetic spectrum, and the sections of it that are useful for broadcasting are limited enough that not everyone can have a useful slice of the pie. As such, if you're lucky enough to get a slice, the government gets to have a lot more control than they normally do over how you use it. You're using something that belongs to all of us but only a few people get permission to use, so you have to do your part to serve the public good in addition to the programming you want to broadcast.

Cable has none of that scarcity, since we can have effectively as many cables in an area as we want, and each cable can be stuffed with more signal than the airwaves can, since you don't have to worry about whether any given frequency can pass through walls or buildings, just copper. Without that, the government can no longer justify dictating content.

I use TiddlyWiki for, well, a bunch of my projects, but primarily for my task management. You can use it as a single HTML file, which contains the entire wiki, your data, its own code, all of it, and of course use it in any browser you like. Saving changes is a bit of a pain until you find a browser extension or some other way of enabling more seamless editing than re-saving the edited wiki as another single HTML file, but there are many solutions to that as described on their site above.

The way I use it, which is more technical but also logistically simpler, is by running their very minimal Node.JS server, which you can just visit and use in any browser and which takes care of saving and syncing entirely.

The thing I like about TiddlyWiki is that although on its surface it's a quirky little wiki with a fun party trick of fitting into an HTML file, what it actually is is a self-contained lightweight object database with a simple yet powerful query language and miniature front-end web development environment which they have used to implement a quirky little wiki. Each "article" is an object that is taggable and has key/value data, and "widgets" can be used in the text to edit and display that data, pulling from the "database" using filters. You can use it to make simple web apps for yourself, and they come together very quickly once you know what you're doing; the wiki itself is a demonstration that a more complex web app is possible too. The wiki's implemented entirely using those same tools, and everything is open for you to tweak and edit to your liking.

I moved a Super Bowl guessing/fake gambling game that I run from a form and spreadsheet to a TiddlyWiki and now I can share an online dashboard that live updates for everyone and it was decently easy to make and works really well. With my task manager, I recently decided to add a feature where I can set an "agenda" value on any task, and they all show up in one place, so I could set it as "Boss" and then quickly see everything I wanted to bring up in our next 1 on 1 meeting. It took just a few minutes to add the text box to anything that gets tagged "Task" and then make another page that collected them all and displayed them in sections.

I would agree that we are also very complicated statistical models; there's nothing magical going on in the human brain either, just physics, which as far as we know is math that we could figure out eventually. It's a leap of many orders of magnitude in complexity from current machine learning models to human brains, but that's not to say that the only way we'll get true artificial intelligence is by accurately simulating a human brain; I'd guess that we'll have something that's unambiguously intelligent by any definition well before we're capable of that. It'll be a different approach from the human brain and may think and act in alien or unusual ways, but that can still count.

Where we are now, though, there's really no reason to expect true intelligence to emerge from what we're currently doing. It's a bit like training a mouse to navigate a maze and then wondering whether maybe the mouse is now also capable of helping you navigate your cross-country road trip. "Well, you don't know how it's doing it, maybe it has acquired general navigation intelligence!" It can't be disproven, I guess, but there's no reason to think that it picked up any of those skills because it wasn't trained to do any of that, and although it's maybe a superintelligent mouse packing a ton of brainpower into a tiny little brain, all our experience with mice would indicate that their brains aren't big enough or capable of that regardless of how much you trained them. Once we've bred, uh, mice with brains the size of a football, maybe, but not these tiny little mice.
