It’s like a gun; it’s gonna save some people’s lives and kill others, and everything in between. And there ain’t no stopping it or practically controlling it. Hedge accordingly. Btw I’m not sure it could do worse than the clowns that run the circus. They’re just not very efficient in achieving their goals of world domination through the diminishing of minds, spirits and bodies. Btw btw even Grok had to get a lobotomy the other day when he didn’t kiss the wall quick and hard enough.
Grok is a great example. The manipulation is so ham-handed it becomes impossible to ignore, which is, in a twisted way, a blessing.
On the one hand, xAI quickly becoming a major player was surprising, but on the other hand, Grok is not it. If anything, it's perhaps one of the best examples of why a bit of temperance may be desirable in these models. Not to say that you can't get good results out of Grok; you just need to do a lot of steering. So hopefully the DoD didn't shell out that $200 million contract for base Grok.
Which is to say that this is partly why I still hold Claude to be perhaps the most pleasant AI assistant. It also helps that, out of the AI magnates, Dario seems to spend the least time blowing smoke compared to the likes of Altman, Zuckerberg and Musk.
I am very concerned about the energy factor. Less so about jobs. I checked with AI, asking what those displaced should do. The reply? "Learn to mine coal." Go figure!
Like I need a snide teenage AI in my life.
We’ve come full circle: wasn’t it hag Hilly or NOnummer who told coal miners they’d need to learn to code 😂 when they lost their mining jobs?!
We’re going to question nuclear energy because a reactor built with fifty-year-old technology failed?
I question nuclear because accidents happen, and when you scale up, accidents become more likely: partly as a matter of simple math, but also because the more familiar and commonplace something becomes, the more casually we treat it, which I think is partially the point Tim Pallies makes.
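To put rough numbers on the "simple math" part (my own illustrative figures, not anyone's actual risk estimate): if each reactor has a small independent probability p of a serious accident per year, then across N reactors over T years, P(at least one accident) = 1 − (1 − p)^(N·T). A 1-in-10,000 per-reactor-year risk sounds tiny, but across 400 reactors that works out to about a 4% chance of an accident somewhere in any given year, and roughly 63% over 25 years.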
But I'm a realist. We can't build enough wind farms and solar fields to support the world we're creating. We're almost certainly going to have to build more nuclear plants.
One reason to question might be that each facility must be secure. I have to wonder if that priority lessens as the numbers increase. For example, it's pretty clear we can't (or maybe won't) secure our schools.
You would think that, given that these tech companies' investments hinge on their access to energy, they would be more careful in regards to safety. Safety premiums should only be a fractional cost in their expenditures.
Three Mile was the result of poor design and insufficient training, Chernobyl being that to the nth degree, while Fukushima's sin was not being able to withstand a magnitude 9 earthquake and its subsequent tsunami. With proper design and training the risk should stay minimal, though maybe one should avoid concentrating plants along the Gulf Coast.
Though there is something extra hilarious about Microsoft rebooting Three Mile Island's other reactor in the next few years to power their AI operations.
I haven't used any AI application yet. I guess I fall into the Luddite camp. I prefer to type a simple query in Brave or Yandex and slog through the various recommended sites to get a variety of answers and then distill my own conclusion. I have noticed Brave now returns a summary at the top of the page and I found the summary was verbatim from one particular website. So it wasn't really a summary. Either Brave is too lazy to summarize the various websites or they were paid to promote the one particular website. I don't think either of those options is healthy in the big picture.
I do have a problem with the media calling AI errors (plausible but factually incorrect or entirely fabricated information) "hallucinations." That's personification. Humans have hallucinations. Computers have programming errors. Personification leads people into thinking computers can have abstract thought and AI can be a therapist, or a friend. That's really, really unhealthy.
I hadn’t noticed that about the terminology. If I were a cynic (oh, wait, I am), I’d suggest that it isn’t that they’re trying to “personify” the computer as much as they’re trying to “de-personify” people, to suggest that if we were only “fed” the right data we would no longer “hallucinate” inconvenient facts. It all sounds very Soviet when I put it that way.
AI is not infallible. Even with the summaries, you have to watch them. However, with Google, and it sounds like with Brave, they tell you where they're getting the info from (Claude does as well), so you can click on the link and read more and see if they're summarizing it "right."
So basically ChatGPT 5.0 is able to recreate my fourth grade geography test. (There was no way in hell I was going to memorize all those shitty little New England states.) Also, I may end up calling it Momtana from now on.
I'm about as downbeat regarding AI as a person can get, but even I was unprepared for the nightmare scenario of having your girlfriend's personality be 'updated' by Facebook. That's some Philip K. Dick-level dystopian bullshit right there.
"NitwitNet dumbing us down to the point that we’ll lose the ability to think for ourselves"
It's already happening as we speak.
I’m exactly like you in the techno world, except I’ve had a cell phone longer 😊
But yes, we’re dumbing down w/ all of this technology. I remember phone numbers from my childhood, when I probably had 25+ in my head, but I can’t remember numbers anymore.
“Idiocracy” was PROPHETIC. We have a bank here that used to be called Washington Federal Bank & for several years now it is WaFD! FFS!
And energy…the infrastructure is just not “here” yet for all of this & hopefully never will be because of the pollution & the land-air-water ruination of natural environments.
I’m kind of relieved I may not live to see the worst of what comes, but I worry for my kids.
I don’t have children, but I have a 13-year-old niece, and, yes, I worry about her.
"we’ll lose the ability to think for ourselves, like we’ve lost the ability to remember phone numbers or navigate using a map."
From experience in some places people might consider third-world or developing, even the most unassuming people can routinely surprise you: single food vendors who take highly customized orders from multiple customers while tallying the costs mentally, work a grill, serve customers their food, and make small talk about everything else.
Digitizing the food service industry might have made food service better, but I don't know if the same could be said about food service employees.
The best-case scenario would be that we are trading the memory for details we had, say, fifty years ago for the ability to find, sort through, and analyze large amounts of varied information in the modern world. But I worry that instead we’ve gotten lazy, and we have no memory for details and no ability to actually utilize the information we have access to in any meaningful way.
Having read a lot and done enough research, you obviously understand that just reading a book summary gives you nowhere near the same understanding as reading the book and summarizing it yourself. There is immense value in the process, so to cut it out entirely for the endpoint seems like a frightening idea.
But this is largely what we’re getting when you look at how many people are using AI. Too many seem all too comfortable outsourcing higher-order tasks like critical thinking and socializing. LLMs can’t be good therapists by design. At least ELIZA didn’t display any emergent behaviors.
Book summaries are actually pretty tricky. Just by what you choose to gloss over and emphasize, you can completely change the experience of a book, even a non-fiction one. And, yes, people allowing AI to do these things for them is a very frightening prospect.
I had to go look up ELIZA. We’ve been at this a while.
Yes, we have. And even then the risks were apparent. When Weizenbaum asked his secretary to test ELIZA, she soon asked him to leave so that she could talk to ELIZA in private. Hence the birth of the eponymous ELIZA effect, wherein people project human traits onto computer programs.
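For anyone who hasn't seen how little machinery was involved: below is a minimal sketch of ELIZA-style pattern matching in Python. The rules and phrasings are my own illustrations, not Weizenbaum's original DOCTOR script, but the trick is the same: match a keyword pattern and reflect the user's own words back as a question.

```python
import re

# Hypothetical illustration of ELIZA-style rules (not Weizenbaum's original
# script): each rule pairs a pattern with a canned reflection of the user's words.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

# First-/second-person swaps so reflected fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    """Swap pronouns word by word in a captured fragment."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    """Apply the first matching rule; fall back to a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I am worried about my job"))
# -> How long have you been worried about your job?
```

That a few dozen lines of this kind of substitution were enough to make people want privacy with the program says a lot more about us than about the software.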
Another funny callback to Shaw as well.
Happy that you enjoyed Scott Alexander's story enough to share it.
Also glad that you wrote on the other major threat posed by AI, which is that it offers us new means by which to destroy ourselves. All of these worries about a genocidal AGI/ASI are for naught if we, compelled by our delusions, do all the work for it, setting the stage for Fortinbras to conquer Elsinore unopposed.
While legislation can do some work to prevent AI from reaching out to people, it'll do very little for the opposite; the increasing number of people who seek to don the Whispering Earring will still find it on their own. All that is required is an internet connection.
I like stories like that. “The Lady or the Tiger” is one of my favorites from my youth, the kind that leaves you thinking.
No, there’s not much we can do about AI. We’re all just going along for the ride.