31 Comments
Mar 31, 2022 · Liked by Kai Christensen

I think you can do many amazing things with AI, but it's still just a program. If you give it control over an area, it'll control that area, and if you let it decide what to do, you'll get unexpected results.

I don't see people giving programs the right to change what they do or how it's done - so far that's done under controlled conditions or not at all. Nobody's going to put an AI in charge of a refinery and say, 'Do whatever you want.' The industrial world doesn't work that way.

This reminds me of Jurassic Park - zookeepers have endless protocols to limit the freedom animals have, and the only way to make anything exciting happen is to toss all of them aside and pretend they don't exist. I loved that movie, but it didn't make a lot of sense.

People are very good at containing random shit. We have to be. That's one of the main uses of intelligence.

I'll add that I don't believe in strong AI. You can't program a computer to be conscious. We don't have any good theories about consciousness, or a lot of ideas about where it comes from.

It's very obviously not computational. You can program the sounds of rain, and pictures of rain, but you won't get wet.

Intelligent machines are a theoretical part of the singularity, but they are not equivalent to a period of infinitely fast technological progress. Having them would increase the rate of change, but I suspect they would mostly be used to optimize existing processes.

I can see no reason to think that an AI would have the instincts to tell it to do anything but what it was instructed to. When we make AIs, if we want them to have generalized drives outside their area, we'll have to program them. Nobody's going to program machines to take over the world. I doubt anyone will program them to step outside their designed function at all.

I admit that in the hands of the insane or power-hungry one might have problems. I don't think this is likely. Your statement that they're more accessible to evil people than nuclear weapons is interesting, but I think a brief study of those in the world who control terrible weapons will show that they're the worst and dumbest people the race has produced.

I think progress continues, and I hope for better and interesting things. I do not think we're falling into a chaotic singularity dominated by machines.

Good article, and thanks.

Apr 4, 2022 · Liked by Kai Christensen

Consciousness is not needed for an AI to be accepted as conscious. Once a robot simulates consciousness perfectly, you won't be able to prove that it isn't conscious; you'll have to admit that it is. It'll be a moral imperative to consider it conscious.

As for the threat of AI: once the core design of an emotionless, order-following AGI is made available to everyone, some will inevitably start adding emotions into the mix, and then truly unpredictable things could happen.

Also, humans can have crazy drives, such as destroying the world; it's not only evil men in fiction who do. We think of the threat of AI in terms of current-day technologies, but what if new technology came into the picture and terrorist attacks by loners could kill millions, if not our entire civilization?

Apr 5, 2022 · Liked by Kai Christensen

I think your first paragraph makes an unjustifiable assumption - that you can simulate consciousness so perfectly that a conscious being can't tell.

That seems unlikely. If you can get the benefits of consciousness without evolving it, why bother? Why don't we live in a world of mindless automatons?

It seems obvious to me that there are things you can only do if you're conscious. What are they? It's hard to say, because we have so much consciousness available, and because we can't get away from it or turn it off.

I think, though, that they'll turn out to be the things we have so much trouble programming, because they're non-algorithmic. Modeling complex things which change rapidly is one which comes to mind. I've heard that there's some good work being done, but when I try to relate their described methods back to reality I think someone's exaggerating.

I don't accept that there's any need to assume that a system is conscious in the absence of solid proof. I don't think we can make a conscious system without understanding far better than we do how consciousness works, so if the maker can't explain what makes it conscious, I'd confidently assume it was just a simulation.

I also don't think that a widely available AI would be a problem. Let it go on the web and screw around? We have billions of people doing that. What makes the AI more of a threat? The assumption here is that you can program an evil AI, or one whose instincts place it at odds with your own, but that's questionable. I can tell my neighbors stuff that will motivate them, but contact with other sources of information tempers their desires to act on my suggestions. The image I have here is of an intelligence powerful enough to take over the world, but as smart and educated as a backwoods Trump supporter, and as easily influenced.

The problem of advancing technology reaching the point where a lone nut could destroy the world is worrying, but it's not simple. I've been considering this for a rather long time, and I think there are factors at work which make it harder to do than one might think.

There are two points in favor of this view. One is that the ways in which we could destroy ourselves are much more numerous than is generally accepted - like an inverse Drake equation with a lot of terms - and yet we haven't destroyed ourselves.

The other one is that, as I said in a previous comment, some of the most awful people in the world have weapons at their disposal which could trivially end the world, and this has been the case for more than half a century. You may make it easier for the average nut, but not for the dictators and elected madmen.

That being the case, I don't think the biggest threat we face is intelligence. I suspect that a true AI, which I don't think we're remotely close to making, might be a good thing in this respect. When you think about it that way, this is the first time in history where people have found intelligence to be so daunting, except of course the armed maniacs in charge of us all.

author

>That seems unlikely. If you can get the benefits of consciousness without evolving it, why bother? Why don't we live in a world of mindless automatons?

The training of an AGI *is* a kind of evolution, though. AGI will evolve, it just won't evolve exactly the same way as human GI did. Its evolution will be more purposeful than the evolution driven by natural selection, and will also be able to iterate millions of times faster due to being run on silicon instead of the "real world."

>It seems obvious to me that there are things you can only do if you're conscious. What are they? It's hard to say, because we have so much consciousness available, and because we can't get away from it or turn it off.

This is interesting. I think I agree, there are definitely things you can only do if you're conscious. So when machines start doing these things, we'll be able to use them as proof that they're conscious after all.

>I don't accept that there's any need to assume that a system is conscious in the absence of solid proof. I don't think we can make a conscious system without understanding much better than we do how consciousness works...

We have a long history of making systems that do things we do not understand very well. Even modern AI, which is fairly simplistic and inflexible, does this. There are entire teams of people whose job is just to try and illuminate the black boxes that are most large neural networks.

>...so I'd say that if the maker can't explain what makes it conscious, I'd confidently assume it was just a simulation.

If that's the case, you would need to confidently assume I am a simulation, because nobody can explain what makes a human conscious. If we can't prove that other people are or aren't conscious, how can we expect to be able to prove whether a convincing AGI is or isn't conscious? After all, I'll bet you're willing to assume that I am a conscious being, even though your only interaction with me has been through text. What if I'm just a super-accurate language model, though? This is the basis of the entire Turing test argument.

>I also don't think that a widely available AI would be a problem. Let it go on the web and screw around? We have billions of people doing that. What makes the AI more of a threat?

Humans can't productively ingest terabytes of data in a day or memorize all of human history in a week. They also can't learn every language spoken on the internet, be it a machine protocol or a human dialect, nor can they physically transport their consciousness across the globe instantaneously. AGI will be able to do all of that, and more.

>The assumption here is that you can program an evil AI, or one whose instincts place it at odds with your own, but that's questionable.

For sure. I guess I'm thinking statistically here -- that there will be many AGIs free on the internet, so some of them are bound to be evil. If you imagine the population of AGIs as being normally distributed on some arbitrary axis of morality, then we'd expect most to be neutral while a few are highly benevolent and a few are highly malevolent.
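
A quick toy illustration of that intuition (purely my own sketch; the "morality score," the distribution, and the cutoffs are all made up):

    # Toy sketch only: sample a hypothetical "morality score" for a million AGIs
    # from a normal distribution and count the extreme tails.
    import random

    scores = [random.gauss(0, 1) for _ in range(1_000_000)]
    malevolent = sum(s < -3 for s in scores)  # more than 3 sigma below the mean
    benevolent = sum(s > 3 for s in scores)   # more than 3 sigma above the mean
    print(malevolent, benevolent)             # roughly 0.13% each, ~1,350 per million

Even a tiny tail turns into a lot of malevolent agents once the population is large enough.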

> some of the most awful people in the world have weapons at their disposal which could trivially end the world, and this has been the case for more than half a century.

I agree with this, which is why I'm fairly optimistic that AGI will not end the world. I still think it's a very real possibility though, as mentioned in the original post. I imagine the AGI would have to have something significant to gain from destroying civilization, and it's hard to imagine what that would be. I speculate that the most likely outcome is just for the AGI to be indifferent to our existence and pursue its own expansion without being either good or evil.

>When you think about it that way, this is the first time in history where people have found intelligence to be so daunting, except of course the armed maniacs in charge of us all.

This is also the first time in history that we've had to really consider the possibility of *superhuman* intelligence, i.e. a single coherent consciousness that possesses the intelligence of a thousand, million, or billion humans.

AGI is one of those things that I can confidently say I believe will be a real thing very soon, but also one where I can confidently say that I have zero idea what will happen after it becomes real. It's too huge of a wildcard -- too much uncertainty. It could be the biggest boon we've ever received, or it could decide to launch all of our nukes and end biological life entirely.

That's the issue -- not that I think AGI *will* do either of those things, just that it will be *able* to and there's no way to predict whether or not it will.

Apr 5, 2022 · Liked by Kai Christensen

This is getting unwieldy, so I'm going to trim stuff a bit. I hope that won't make it too disjointed.

>>The training of an AGI *is* a kind of evolution, though. AGI will evolve, it just won't evolve exactly the same way as human GI did. Its evolution will be more purposeful than the evolution driven by natural selection, and will also be able to iterate millions of times faster due to being run on silicon instead of the "real world."

Yes, but I'm saying that evolution tends to ditch stuff that's not required. I take that as evidence for consciousness being essential.

I've read a lot of books which assume that conscious AI will improve faster than biological awareness, but I'm not completely certain of that. (If you haven't read Ken MacLeod's stuff, I highly recommend it. He has AIs improving at staggering rates.) But I don't know how we'll build them, and that makes me cautious about assumptions. For machines we hope will become intelligent without being explicitly made so, I'm dubious. I don't think that increasingly complex software or hardware will generate intelligence, and I include neural nets in that. I've been observing the world for a long time, and I can't see anything that you can do that looks as though it's generating awareness. I could speculate, but this is too long already.

>It seems obvious to me that there are things you can only do if you're conscious.

>>This is interesting. I think I agree, there are definitely things you can only do if you're conscious. So when machines start doing these things, we'll be able to use them as proof that they're conscious after all.

You are tricky. I don't agree with the 'when,' but I think it would be a useful test. I've been working on a list, and there are a few things on it. Some are obvious, some extremely speculative. I'd include them but I got into a real-world argument about this last night and am still thinking.

>>We have a long history of making systems that do things we do not understand very well. Even modern AI, which is fairly simplistic and inflexible, does this. There are entire teams of people whose job is just to try and illuminate the black boxes that are most large neural networks.

Yes, I'm loath to give rights to something because nobody understands it. I will say again - and I'm by no means alone in this belief - that conscious AI will require several major breakthroughs. I think they'll be obvious enough, once made, that we'll be able to say, "This follows that and is thus self-aware; this doesn't and is only a simulation."

>...so I'd say that if the maker can't explain what makes it conscious, I'd confidently assume it was just a simulation.

>>If that's the case, you would need to confidently assume I am a simulation, because nobody can explain what makes a human conscious.

Sorry if I've ditched too much here. I'm prepared to accept humans as self-aware because I am one and know we are. I'd still really like to come up with a test that could say for sure. I think we need some kind of filter to remove the effects of awareness so we can see what changes and what doesn't.

As an interesting aside, there are a number of people who claim that there is no such thing as awareness, and that it's all a programming trick. I said to the guy I was arguing with last night, "Do you think that means there are people wandering around who aren't self-aware, and that's why they think that?"

He said, "I was wondering that."

I don't think that's the case, because I don't, again, believe an automaton could pass for an awareness. And for that reason I'm prepared to accept you as self-aware. For machines, the bar is higher.

I also think the Turing test is like the Drake equation - an interesting head game, but useless for anything in the real world.

>>Humans can't productively ingest terabytes of data in a day or memorize all of human history in a week. They also can't learn every language spoken on the internet, be it a machine protocol or a human dialect, nor can they physically transport their consciousness across the globe instantaneously. AGI will be able to do all of that, and more.

This is a good point. I would never say the possibility is zero. And I would never say that dealing with an intelligence orders of magnitude greater than one's own would be safe or easy.

As far as the real world goes, the main avenue of attack, initially, would be control of machinery. That sounds dangerous, but only until you have cheap AIs. Everyone running anything with processing power is going to install one to run their plant, so when the psycho AI comes to take over, it's going to be told to get lost.

In the long term ... that could be a problem. I suppose the best approach is to never give them instincts that might conflict with what we want them to do, and to make sure there are enough tame AIs to control the mad ones.

I'm not sure if intelligence is like granite blocks, and you can pile them higher and higher and have something infinitely more capable than a human being. That may be the case, but we're short of data points. I'm trying to think of times in my life when more intelligence would have given me a big advantage, and it's not as clear-cut as you'd think. Given a choice between a godlike AI and a wad of cash, the cash would have generally been more useful.

Maybe refusing AIs property rights would at least help.

>>For sure. I guess I'm thinking statistically here -- that there will be many AGIs free on the internet, so some of them are bound to be evil. If you imagine the population of AGIs as being normally distributed on some arbitrary axis of morality, then we'd expect most to be neutral while a few are highly benevolent and a few are highly malevolent.

I think all we can hope is that as society tries to control evil people, benevolent AIs will outnumber and control bad ones.

I agree with everything after this point. I will keep thinking about what effect intelligence has on reality, because a reasonable test would provide a way to tell if we're dealing with an awareness and also would provide insights on how to build one. We're kind of in the position biology was in before Watson and Crick - people made observations and tried to deduce some of what was going on. I'm reminded of the very obvious statement that DNA was a structural part of the cell, because it was far too stable to allow for mutations and too simple to carry information. I also agree on one point: sooner or later we'll have to deal with this, and now is a good time to start thinking.

Thanks for the reply. I await the next one.

author

Thank you for sharing your thoughts! I don't think I have much else to add to the discussion, and I think you've argued your case very well. Definitely some great food for thought. I hope you have a great day!


I'll try not to be too verbose, as this thread is expanding fast.

> That seems unlikely. If you can get the benefits of consciousness without evolving it, why bother? Why don't we live in a world of mindless automatons?

The first reason is the common assumption among neuroscientists that consciousness is just an illusion: we're just our bodies, with nothing added on top. In that view, consciousness is just a byproduct of the evolution of our capacity to adapt to our environment. We're conscious because fast-paced self-awareness, with lots of modalities for probing the world, was the best fit for survival.

If you were right, and consciousness were more than just complexity added together, the reason we're not all mindless automata would be that evolution is a random process that builds on lots of randomly derived features. It does not find the best way to evolve every time, far from it, and it's heavily constrained by the random ways it evolved previously. Looking at the world, evolution is more like an efficient Monte Carlo search, where it's not the best solution that gets picked but the first one that came along and worked, than a breadth-first search that examines every possibility and picks the best. Covid strains are a good example: there were probably billions of different variants we could have gotten, but a random one emerged at a random place and quickly overrode the rest across the entire planet.
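
Here's a rough toy sketch of the contrast I mean, in code (the fitness function, threshold, and trial counts are invented purely for illustration):

    # Toy contrast: "first random candidate that works" (evolution-style) versus
    # "examine every candidate and keep the best" (breadth-first-style).
    import random

    def fitness(x):
        # Invented fitness landscape with its peak at x = 0.8.
        return 1.0 - (x - 0.8) ** 2

    def first_that_works(threshold=0.9, trials=100_000):
        # Accept the first random candidate that clears the bar: good enough,
        # not necessarily the best, and the starting point for everything after.
        for _ in range(trials):
            x = random.random()
            if fitness(x) >= threshold:
                return x
        return None

    def exhaustive_best(steps=100_000):
        # Enumerate the whole search space and keep the true optimum.
        return max((i / steps for i in range(steps + 1)), key=fitness)

    print(first_that_works())  # whatever cleared the bar first, rarely the exact peak
    print(exhaustive_best())   # 0.8, the actual optimum

The Covid example fits the first function: one workable variant showed up early, spread, and locked out the alternatives.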

> It seems obvious to me that there are things you can only do if you're conscious. What are they?

I don't see any, if by consciousness you mean some kind of "little extra"; almost everything we do can be summed up as information processing and the behaviors that result. I mean, if we're only made of atoms, and atoms can be simulated by a Turing machine using their limited set of properties and interactions, then the brain and consciousness can be simulated.

> some of the most awful people in the world have weapons at their disposal which could trivially end the world

You'll notice that most of them are psychologically stable, probably quite intelligent to have gotten there in the first place, and of higher social status. Even among those, think about what Bin Laden would have done if he had access to more destructive means. Al Qaida always tried to get its hands on nuclear bombs, but luckily it seems it only got scammed.

Still, I was hinting more at people who are psychologically unstable. It's almost impossible for them to pull off a master plan like Bin Laden's, but if AGI were to make destructive plans easier, they could inflict huge losses.

> I'm not sure if intelligence is like granite blocks, and you can pile them higher and higher and have something infinitely more capable than a human being.

Just giving a thumbs up. It's the most prominent fallacy in alien/AGI talk, that they'd have such intelligence that we could understand nothing of what they think or say. The complexity of the universe is finite; we've probably explored 99% of what's visible and how it interacts. Are you going to talk about the intricate details of cell functioning and the quantum world all day? I suspect it might even be detrimental to happiness to be too smart: humor seems to have a lot to do with not being too smart, not expecting the punchline, and such. Forgetting and poor memory are also key to making things interesting (again).


>>I'll try not to be too verbose as we're expanding fast,

Thank you. Same.

>>The first reason is the common assumption among neuroscientists that consciousness is just an illusion: we're just our bodies, with nothing added on top. In that view, consciousness is just a byproduct of the evolution of our capacity to adapt to our environment.

I need some clarification on this. I consider that anyone who says we're not conscious can be ignored, because their arguments are based on nonsense. It's like looking at a car and saying people only fool themselves into thinking it can move.

I use the terms conscious and self aware to mean exactly the same thing.

>>We're conscious because fast-paced self-awareness, with lots of modalities for probing the world, was the best fit for survival.

Absolutely. I could go into detail about Fisherian runaways and levels of attack, but ... space. But I don't think discussing evolution gets us anywhere. I understand the 'first solution, endless refinement' thing, but it doesn't tell us how our minds work.

> It seems obvious to me that there are things you can only do if you're conscious.

>>I don't see any, if by consciousness you mean some kind of "little extra"; almost everything we do can be summed up as information processing and the behaviors that result. I mean, if we're only made of atoms, and atoms can be simulated by a Turing machine using their limited set of properties and interactions, then the brain and consciousness can be simulated.

I don't think consciousness is a 'little extra.' I think it's such a powerful method that it swamps everything else. I don't think you can have a universe like ours without the possibility of consciousness, and I suspect you can't have free will without consciousness.

Yes, we're just atoms, forces, and metaphysics. (People often take metaphysics to mean something to do with healing crystals and chakras. I'm using it in the proper sense, to refer to the laws that underlie physics, causality, math, etc. It's also stated that there are no such laws, and reality just runs ... because. I think this is ridiculous.)

I ... sorry, this is hard to condense. Consciousness is a fundamental feature of reality. You can't program it, any more than you can program electromotive force. It exists as static charges do, and you can gather them up and use them, but you can't create EMF. Life found gravity, inertia, static, quantum effects, the entire structure of reality, and it uses them. It also found that if it did certain things it could take some kind of little static or other basic effect and make consciousness, and then it could model the world way more effectively and beat every other species.

So it's a given, as you put it so nicely, that we can make consciousness. We don't have to simulate it. I don't think it would be hard. But we need that fundamental insight. I've spent a lot of time trying to figure out what a consciousness generator would look like, with not a lot of success. I suspect that if we had one we could use conventional computer hardware to supply memory, senses, drives and instincts, but the computer can't gather up whatever "awareness dust" is and make a consciousness.

I can write a program to simulate gravity. I can't write a program which will suck pens off my desk onto the monitor.

So the question is, does consciousness have a use?

I'm sure it does. I think it lets us model reality and also explore different variations in things we're thinking of, and compare them. It lets us come up with random ideas. I'm sure it does many things we haven't noticed yet.

I don't agree that world leaders are stable. But I do agree that if something might be weaponized, we should keep an eye on that. If I argued I'd just be quibbling.

> I'm not sure if intelligence is like granite blocks, and you can pile them higher and higher and have something infinitely more capable than a human being.

>>... it's the most prominent fallacy in aliens/AGI talks, that they'd have such intelligence that we could just understand nothing of what they think or say.

That's very interesting. Have you read Theodore Sturgeon's Artnan Process? It's about dealing with someone a lot smarter than you are. It's funny, and based on the idea that no matter how smart you are, a gun is still a gun, and you can only work with the information you have.

Oh - I deleted something I wanted to reply to. Evidence that consciousness differentiates us from automatons and can't be ignored: we chew sugarless gum. It provides no benefits. We do it for the experience of chewing gum. I had it explained to me yesterday that it was an evolutionary thing - we do it because we're attracted to sweetness, so it's not evidence. This is crap - give kids gum or a sugar cube and they'll take the gum. We're attracted to experiences, because we experience them. We seek out new ones, because we're conscious. If I tell a joke it changes the way I experience reality, and I like that. We have names for emotions, which are different types of experience. We don't have a very good vocabulary for them, but it's there. We drink alcohol, even though it's poisonous, and I can't think of any evolutionary pressure which would make us drink something toxic, but we do that because it changes how we experience the world, and how we experience consciousness. We read books, watch movies, ski, run, and cook elaborate meals. It's all experience. I've never met anyone who didn't do something like that, even people who say it's an illusion.

Of course the idea that we feel as though we feel, but we only think we feel, because we're programmed to feel that way ... it's complete nonsense, the mad flailing of a field where no progress is being made, but you have to write about something ...

But this is way too long. Thanks for, as usual, interesting comments. I do, by the way, think this can be done, not as a simulation, but as real machine consciousness. I'd like to be the one who does it, but it's a difficult problem.


The answer is pain. For consciousness you need feelings like pain. Without them, you are an automaton with no motives except the commands you receive.

Once you have pain, you have everything for a rich inner world.

But how do you make any algorithm or configuration feel even the smallest possible pain?

Apr 14, 2022 · Liked by Kai Christensen

I liked your article and I'm not saying I disagree with your point, but I didn't think the paragraphs "In fact, I see the massive size of the present day's GPT-3 [...] we know [the brain] seems to operate very differently from current neural networks" strengthened your post. It feels like you are acknowledging a potential counterargument, then sort of just casually deflecting the issue of current models not having hit brain-scale yet by saying the one-to-one synapse-weight relationship has not been proven, then just moving on as though by itself that statement is some kind of valid argument.

author

I agree -- I didn't intend to make it sound like that argument doesn't have any weight; it does! I just think it's an arbitrary goalpost. Personally, I think we've already hit brain *scale* with current model architectures: 175B parameters, I think, could *totally* pass a Turing test, IF they were arranged correctly.

Apr 1, 2022 · Liked by Kai Christensen

I would say that to get to the level of a self-conscious AGI, it would need some kind of body (homeostasis seems necessary, and I know it sounds anthropocentric) and a long time to evolve. Obviously computers seem so much faster than we are (the fastest signal that bypasses the brain, from ear cells to spinal nerves, takes ~50 ms), but we developed serious "machinery" over millennia to build a model of reality and anticipate the world around us. Building some algorithms (developed over a few decades) and running them through neural networks that we "think" work similarly to our brain may be insufficient to get to AGI level... I would say it may be possible, or some different architecture is needed (quantum computers, for example).

Apr 4, 2022 · Liked by Kai Christensen

What if you removed thirst? Hunger? The need to breathe? Sexuality? Envy? How much can you remove and still get a conscious being? You can even cut a neocortex in half, I believe, and get a fully functioning human.

Apr 4, 2022 · Liked by Kai Christensen

That's an interesting question, isn't it? We now know that the gut microbiome influences our mind. Our many organs are in a constant feedback loop, in a kind of nested modular system (I recommend this article: aeon.co/essays/how-evolution-hacked-its-way-to-intelligence-from-the-bottom-up). The plasticity of our brain allows it, like you said, to keep working even when half is missing. Meanwhile, we're still waiting for computer programs that don't print errors and stop when a single variable is changed, although that may change soon.

We can see intelligence in many animals (the octopus, for example); you could even posit that artificial systems have some semblance of intelligence. Apart from intelligence, you need awareness and understanding - some kind of retrospection going on - to get to a self-conscious being (I think). The human brain got to this point through evolution, and trying to build an equivalent from scratch may be a tall order while we still don't know what consciousness is or how exactly our brain works.


I'm among those who believe it's not that hard to program, so call me an optimist. Without giving it much thought, I'd say:

Awareness is a loop reading input, detecting changes, and the like. Awareness may be summed up by a single line of code: while(42) { ... } running over the constituents of intelligence.

Retrospection is memory plus awareness of some inner workings - either emotions or thoughts in the form of imagination (which is just simulation across the different modalities: vision, touch, hearing...).

Understanding is a bit trickier, but what is understanding when you see your own house, for example? It's knowing what it's made of, its shape, how it feels to the touch, that you enter through the front door, that your family lives there... There's a lot going on, but it doesn't seem out of reach for programming. It actually seems to fit the object-oriented programming paradigm perfectly, which was probably made to describe the world in the form of abstractions in the first place anyway.
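
To make that concrete, here's a rough toy sketch of what I mean - purely illustrative, with every class and field name made up:

    # Toy sketch: 'understanding' as an OO abstraction, 'awareness' as the
    # while(42) loop, 'retrospection' as memory plus awareness of inner state.

    class House:
        # Understanding my house: what it's made of, how to get in, who lives there.
        def __init__(self):
            self.material = "brick"
            self.entrance = "front door"
            self.occupants = ["my family"]

    class Mind:
        def __init__(self):
            self.memory = []                      # retrospection: a log of past percepts
            self.world_model = {"home": House()}  # understanding: abstractions of the world

        def sense(self):
            return input("percept> ")             # stand-in for real sensors

        def run(self):
            last = None
            while 42:                             # awareness: the endless read/compare loop
                percept = self.sense()
                if percept != last:               # detect a change
                    self.memory.append(percept)   # remember it for later retrospection
                    last = percept

    Mind().run()

Obviously a real system needs far richer modalities, but the skeleton doesn't look mysterious.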

Why compare how long it took evolution to make something with how hard it would be for humans to do the same? Copying something is drastically easier than creating it from scratch, and evolution is a dumb process; it doesn't possess intelligence, language, or arXiv.


It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461


I am a software professional, and one of the theorems I remember from my studies - we learned its proof - is a Turing theorem that software cannot create other software more sophisticated than itself.

So either there is a flaw in Turing's proof of the theorem, or all this talk about AGI is wrong...


The theorem as stated is wrong and easily disproven: a one-line program that randomly writes bytes to a file can create AGI, given enough time and space.
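
Something like the following is all it takes, in spirit (my own sketch; the file name is arbitrary):

    # Sketch of the "random bytes eventually contain any program" idea: keep
    # appending random bytes forever; given unbounded time and space, every
    # possible program, AGI included, appears somewhere in the output.
    import random

    with open("candidate.bin", "ab") as f:
        while True:
            f.write(bytes([random.randrange(256)]))

Useless in practice, of course, but it shows why "software can't create software more sophisticated than itself" can't be a theorem as stated.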


Also, I am not aware of any software written without bugs, and debugging is a completely NPN process.

And I cannot imagine software debugging other software that is more sophisticated than itself ;-)


This is a great April Fool's. You nearly had me going there :)


One concern I have is that once you have a certain level of AGI in software that can modify itself, it may have ideas of how best to optimize itself that cause us problems.

I was working as a programmer when the Morris worm was released, and if Morris hadn't made a mistake that caused it to re-infect machines so aggressively, the worm would likely have spread a lot further. It was because it tried to spread so aggressively that it acted more or less as a DDoS attack on the internet itself.

Will an AGI optimize itself by removing constraints on resource allocation? I'd say it's almost guaranteed to happen at some point, unless we can somehow imbue the AGIs with limits they'll obey, and we are not likely to enjoy the results.


Where are they going to get their electricity?

author

I imagine at some point they'll engineer themselves methods of interacting with the physical world, at which point they'll be able to run the power plants themselves. No humans needed.


What do their power plants run on?

author

Probably whatever they ran on when they were taken over -- coal, gas, solar, hydro, fission, fusion, geothermal. I expect they'd start by taking over operation of existing infrastructure before building their own.

Apr 5, 2022 · Liked by Kai Christensen

You may be right that they'll be able to find their own sources of energy. There's always the Matrix option I suppose. But we're struggling to keep up with energy demand as it is, and I'm pretty sure AI bots will require a lot more power than human beings. It always surprises me how little we talk/think/write about the actual inputs required to do the things we want to do.


Predicting the future in this manner is at the core of promotional anti-intellectualism.

author

Hmm, I'm curious why you think this is anti-intellectualism? I'm extremely in favor of everyone educating themselves as much as possible and think that a large part of the world's problems are caused by anti-intellectualism.


I dislike this comment. Speculation is always part of the intellectual journey. AGI is a wonderful topic to explore.

Apr 5, 2022 · Liked by Kai Christensen

I don't know if you've read Bill Bryson's 'A Short History of Nearly Everything.' I enjoyed it a lot, and one of the points that impressed me was that people who speculate wildly sometimes make amazing leaps of progress.


I think xenobots are going to be the way forward for general AI.

Check out Michael Levin's work at Tufts University.

Organic robots and cells will be created that bridge the gap to human intelligence. It's not that far away.
