Belial666 wrote:JohnRoth wrote:The basic fact is that we really can't define what "intelligence" is
"Intelligence" is sufficient complexity/capability in any computing and control system where the system can successfully and reliably repurpose or even improve the efficiency of its programming by itself, usually to adapt to external requirements.
So you're going with the idea that "intelligence" is an emergent property of complexity. I think this is the same idea that's being flogged by the Singularity folks. I'll just say that I don't buy it, at least in the form Kurzweil and company are pushing.
Belial666 wrote:With the above definition, humans are borderline intelligent for now; they can program new functions into their system beginning with "balance" and "sensory pattern recognition", which lead to "walking" and "language" and eventually to things such as "philosophy" and "abstract math". But that reprogramming is kinda slow, since it is based mostly on external stimuli, and its reliability is low, because its coding ends up being an absurdly complex and tangled mess; it adapts to every single stimulus (internal and external), not just beneficial ones. Also, humans have no direct access to their own code for improvement purposes. That's a bad thing in many cases (you can't edit away your dislike for your boring work, or PTSD, or your smoking habit) but a good thing in others (the government can't make programmed assassins, and corporations can't make corporate drones whose only goal in life is to work all day with no pay and enjoy it).
Are you familiar with the phrase "this is so bad it isn't even wrong"? Both examples you cited are special-purpose systems, one of which (balance) isn't even in the central nervous system (aka the brain).
You also seem to be using the "programming" metaphor, which is what AI started out with in the 50s and 60s, and which ran into a blank wall - it's now pretty well accepted (except by a few old fogeys) that the way the brain works has almost nothing in common with computers as we know them.
Belial666 wrote:JohnRoth wrote:Nor can we define "consciousness."
I think you mean "sentience". That's the capability of a system to perform self-analysis and/or self-improvement of its own programming without relying on any new external stimuli - though it is usually based on past stimuli. This definition covers things such as knowing you exist, knowing whether you're intelligent or not, and directing the improvement (or lack thereof) of your own programming.
No, I said consciousness and I meant consciousness. That is, the ability to reflect on one's own internal processes. Calling it "sentience" is simply a verbal ploy to avoid the philosophical knots that contemplating consciousness creates.
Belial666 wrote:It should be noted that a system can be intelligent without being sentient - that would be the case of several animals such as dogs and cats and dolphins, for example, or adaptive programming such as seen in the Honorverse in some cases.
Dogs are definitely conscious. The definitive test for first-level consciousness is whether the critter can recognize itself in a mirror. Dogs pass this test easily.
Belial666 wrote:It should also be noted that you can be sentient without being intelligent - that would be the case of A.I.s who are self-aware but whose creators "locked" their core programming and capabilities against editing, to prevent such things as "going Skynet".
And this is just playing with words, IMNSHO.
If you want to look at some cutting-edge research in psychology, I'd suggest looking at "embodied cognition," the idea that a lot of behavior is shaped by the body and has little or no input from the brain. Robotics is a good example: robots built around a central programming metaphor tend to cope very badly with anything that isn't in their programming. Consider the difference between Honda's ASIMO and Boston Dynamics' BigDog.