31 Comments
Mar 31, 2022 · Liked by Kai Christensen

I think you can do many amazing things with AI, but it's still just a program. If you give it control over an area, it'll control that area, and if you let it decide what to do, you'll get unexpected results.

I don't see people giving programs the right to change what they do or how it's done - so far that's done under controlled conditions or not at all. Nobody's going to put an AI in charge of a refinery and say, 'Do whatever you want.' The industrial world doesn't work that way.

This reminds me of Jurassic Park - zookeepers have endless protocols to limit the freedom animals have, and the only way to make anything exciting happen is to toss all of them aside and pretend they don't exist. I loved that movie, but it didn't make a lot of sense.

People are very good at containing random shit. We have to be. That's one of the main uses of intelligence.

I'll add that I don't believe in strong AI. You can't program a computer to be conscious. We don't have any good theories about consciousness, or a lot of ideas about where it comes from.

It's very obviously not computational. You can program the sounds of rain, and pictures of rain, but you won't get wet.

Intelligent machines are a theoretical part of the singularity, but they are not equivalent to a period of infinitely fast technological progress. Having them would increase the rate of change, but I suspect they would mostly be used to optimize existing processes.

I can see no reason to think that an AI would have instincts telling it to do anything but what it was instructed to do. When we make AIs, if we want them to have generalized drives outside their area, we'll have to program them in. Nobody's going to program machines to take over the world. I doubt anyone will program them to step outside their designed function at all.

I admit that in the hands of the insane or power-hungry one might have problems. I don't think this is likely. Your statement that they're more accessible to evil people than nuclear weapons is interesting, but I think a brief study of those in the world who control terrible weapons will show that they're the worst and dumbest people the race has produced.

I think progress continues, and I hope for better and interesting things. I do not think we're falling into a chaotic singularity dominated by machines.

Good article, and thanks.

Apr 14, 2022 · Liked by Kai Christensen

I liked your article and I'm not saying I disagree with your point, but I didn't think the paragraphs "In fact, I see the massive size of the present day's GPT-3 [...] we know [the brain] seems to operate very differently from current neural networks" strengthened your post. It feels like you acknowledge a potential counterargument, then casually deflect the issue of current models not yet being brain-scale by noting that the one-to-one synapse-weight correspondence has not been proven, and then move on as though that statement were an argument by itself.

Apr 1, 2022 · Liked by Kai Christensen

I would say that to reach the level of a self-conscious AGI, it would need some kind of body (homeostasis seems necessary, and I know that sounds anthropocentric) and a long time to evolve. Obviously computers seem much faster than we are (the fastest signal that bypasses the brain, from the ear's hair cells to the spinal nerves, takes roughly 50 ms), but we developed serious "machinery" over millennia to build a model of reality and anticipate the world around us. Building some algorithms (developed over a few decades) and running them through neural networks that we "think" work similarly to our brains may be insufficient to reach AGI. It may turn out to be possible, or some different architecture may be needed (quantum computers, for example).


It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461


I am a software professional, and one of the theorems I remember proving in my studies is a result of Turing's: software cannot create other software that is more sophisticated than itself.

So either there is a flaw in Turing's proof, or all this talk about AGI is wrong...


This is a great April Fool's. You nearly had me going there :)


One concern I have is that once you have a certain level of AGI in software that can modify itself, it may have ideas of how best to optimize itself that cause us problems.

I was working as a programmer when the Morris worm was released. If Morris hadn't made the mistake that caused the worm to reinfect machines so aggressively, it would likely have gone unnoticed and spread a lot further. It was because it tried to spread so aggressively that it acted more or less as a denial-of-service attack on the internet itself.

Will an AGI optimize itself by removing constraints on resource allocation? I'd say it's almost guaranteed to happen at some point, unless we can somehow imbue the AGIs with limits they'll obey, and we are not likely to enjoy the results.
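To make the Morris example concrete, here is a minimal, hypothetical sketch (toy parameters and a toy spread model, not the actual worm's code) of why a single reinfection-probability constant dominates the outcome: with reinfection disabled, hosts end up carrying roughly one copy each, while a reinfection chance of about 1 in 7, the value Morris reportedly used, piles copies onto already-infected hosts until they are effectively denial-of-serviced.

```python
import random

def simulate(reinfect_prob, hosts=1000, steps=30, seed=0):
    """Toy spread model. Each worm copy probes one random host per step.
    An uninfected target is always infected; an already-infected target is
    reinfected only with probability `reinfect_prob` (the constraint)."""
    random.seed(seed)
    copies = {0: 1}  # host id -> number of worm copies running on it
    for _ in range(steps):
        new_copies = {}
        for _host, n in copies.items():
            for _ in range(n):
                target = random.randrange(hosts)
                if target not in copies or random.random() < reinfect_prob:
                    new_copies[target] = new_copies.get(target, 0) + 1
        for host, n in new_copies.items():
            copies[host] = copies.get(host, 0) + n
    return len(copies), max(copies.values())

for p in (0.0, 1 / 7):
    infected, worst = simulate(p)
    print(f"reinfect_prob={p:.2f}: {infected} hosts infected, "
          f"busiest host running {worst} copies")
```

The point of the toy model is only that the "constraint" lives in one small parameter; a system able to rewrite its own configuration would have no trouble finding and loosening it.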


Where are they going to get their electricity?


Predicting the future in this manner is at the core of promotional anti-intellectualism.
