31 Comments

Augustus

I think you can do many amazing things with AI, but it's still just a program. If you give it control over an area, it'll control that area, and if you let it decide what to do, you'll get unexpected results.

I don't see people giving programs the right to change what they do or how they do it - so far that's done under controlled conditions or not at all. Nobody's going to put an AI in charge of a refinery and say, 'Do whatever you want.' The industrial world doesn't work that way.

This reminds me of Jurassic Park - zookeepers have endless protocols to limit the freedom animals have, and the only way to make anything exciting happen is to toss all of them aside and pretend they don't exist. I loved that movie, but it didn't make a lot of sense.

People are very good at containing random shit. We have to be. That's one of the main uses of intelligence.

I'll add that I don't believe in strong AI. You can't program a computer to be conscious. We don't have any good theories about consciousness, or a lot of ideas about where it comes from.

It's very obviously not computational. You can program the sounds of rain, and pictures of rain, but you won't get wet.

Intelligent machines are a theoretical part of the singularity, but they are not equivalent to a period of infinitely fast technological progress. Having them would increase the rate of change, but I suspect they would mostly be used to optimize existing processes.

I can see no reason to think that an AI would have instincts telling it to do anything but what it was instructed to do. When we make AIs, if we want them to have generalized drives outside their area, we'll have to program those drives in ourselves. Nobody's going to program machines to take over the world. I doubt anyone will program them to step outside their designed function at all.

I admit that in the hands of the insane or the power-hungry one might have problems. I don't think this is likely. Your statement that they're more accessible to evil people than nuclear weapons is interesting, but I think a brief study of those who already control terrible weapons will show that they're the worst and dumbest people the race has produced.

I think progress continues, and I hope for better and interesting things. I do not think we're falling into a chaotic singularity dominated by machines.

Good article, and thanks.

Anish Potnis

I liked your article, and I'm not saying I disagree with your point, but I didn't think the paragraphs "In fact, I see the massive size of the present day's GPT-3 [...] we know [the brain] seems to operate very differently from current neural networks" strengthened your post. It feels like you acknowledge a potential counterargument, then casually deflect the issue of current models not having hit brain-scale yet by saying the one-to-one synapse-weight relationship hasn't been proven, and then move on as though that statement by itself were a valid argument.

29 more comments...
