The Singularity Rises
August 6, 2013 at 9:38 am #82221 Young Master Smeet (Moderator)
https://theconversation.com/japanese-supercomputer-takes-big-byte-out-of-the-brain-16693
Quote: Researchers in Japan have used the powerful K computer, the world's fastest supercomputer, to simulate the complex neural structure of our brain. Using a popular suite of neuron simulating software called NEST, the K computer is able to pull together the power of 82,944 processors to create a network simulating 1.73 billion nerve cells connected by 10.4 trillion synapses – approximating about 1% of the raw processing power of a human brain.
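For anyone who hasn't come across NEST, below is a rough sketch of what a (vastly smaller) simulation looks like through its Python interface. The neuron model, connection rule and parameter values are illustrative guesses on my part, not the K computer's actual configuration, and the exact call names vary a little between NEST versions.

    import nest  # NEST's Python interface (PyNEST)

    nest.ResetKernel()

    # A small population of leaky integrate-and-fire neurons
    # (the K run simulated ~1.73 billion; this creates 1,000).
    neurons = nest.Create("iaf_psc_alpha", 1000)

    # Random recurrent wiring: each neuron receives input from
    # 100 randomly chosen neurons via static synapses.
    nest.Connect(neurons, neurons,
                 conn_spec={"rule": "fixed_indegree", "indegree": 100},
                 syn_spec={"weight": 20.0, "delay": 1.5})

    # Background Poisson drive so the network actually fires.
    noise = nest.Create("poisson_generator", params={"rate": 8000.0})
    nest.Connect(noise, neurons, syn_spec={"weight": 5.0, "delay": 1.0})

    # One second of simulated biological time.
    nest.Simulate(1000.0)

The K run is the same pattern scaled up by six orders of magnitude and spread over 82,944 processors.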
The singularity is reckoned to arrive at the point at which we create greater than human intelligence, and brain simulation (and the super-rapid evolution of brain simulations) is one way this is expected to come about, assuming a computer big and bad enough could simulate an entire brain.
Obviously, this is a long way from that, but it is still a significant step. With quantum computing looming ever nearer:
https://theconversation.com/quantum-computers-coming-to-a-store-near-you-16320
http://io9.com/scientists-freeze-light-for-an-entire-minute-912634479
This capacity comes closer to reality.
Of course, this opens up the door to concepts such as advanced computers planning the economy, and using artificial minds for all sorts of expert systems (who needs a lawyer when you have an AI hooked up to a database that can give an opinion in seconds?).
August 6, 2013 at 9:52 am #95279 LBird (Participant)
Young Master Smeet wrote: …we create greater than human intelligence, and brain simulation…
Isn't this really a philosophical problem, rather than a computing problem? That is, we humans can't yet define 'intelligence', never mind 'duplicate' it! And surely 'brain simulation' is not simply the same as 'consciousness'?
YMS wrote: Of course, this opens up the door to concepts such as advanced computers planning the economy, and using artificial minds for all sorts of expert systems…
The bourgeois dream! Expert systems! Then they can remove the brains from those pesky proletarians, who have needs and desires that 'advanced computers' with their 'artificial minds' cannot even dream of! Oops… we need dreams for production… damn…
August 6, 2013 at 10:58 am #95280 Young Master Smeet (Moderator)
Well, it's a philosophical problem that might be solved by computing methods. Though the computer tech response is to say that the question of whether a computer can think is as uninteresting as asking whether a submarine can swim. After all, Bertrand Russell after 350+ pages didn't manage to prove 1+1=2 (he got to a partial proof, but hadn't yet defined addition), but that doesn't stop us using maths in any case. A computer beat Garry Kasparov at chess (with, yes, the help of human programmers), so we know that 'intelligence-like' capabilities can be produced by computers, up to the point where we may get computer designers producing schematics of cars for robot factories to build.
BTW, I am taken with Searle's Chinese room argument, but a fully virtualised brain could bypass the question of intentionality.
August 6, 2013 at 4:05 pm #95281 ALB (Keymaster)
This is not my field of interest (but I know it's YMS's). A holiday has provided me with a chance to read copies of the Skeptical Inquirer, which I subscribe to. The November/December 2012 issue carried an article by Massimo Pigliucci, a philosopher of science who writes a regular "Thinking About Science" column, entitled "Singularity As Pseudoscience". Naturally it caused a controversy. His basic argument seems to be that the theory of the Singularity rests on the idea that the mind is like a computer, i.e. that human "intelligence" is "a function of speed of calculation and storage capacity":
Quote: … the whole idea of being able to upload one's consciousness [to a computer] assumes a strong — and not at all validated — version of the computational theory of mind. But that theory is, ironically, a flagrant example of dualism, because it separates what Descartes would have called res extensa (mere matter) from the res cogitans (thinking stuff), the latter being defined entirely in terms of logical symbols. There is no reason to believe that that's the way consciousness arises, and there are good reasons to think that it is instead a biological process, tightly linked to other biological processes and substrates typical of the kind of animal we are.
and
Quote: Moreover, if the Singularitarians were to actually get what they wished, they would likely find themselves in a self-made hell. Human psychology evolved alongside a body capable of sensations, emotions and so on — not just pure thought. An entirely formal symbolic consciousness (whatever that might mean) would be nothing like a human being and would experience the world much differently than we do.
I don't know if there is anything in this criticism. Anyway, it seems a long way off.
August 6, 2013 at 4:10 pm #95282 LBird (Participant)
Young Master Smeet wrote: Well, it's a philosophical problem that might be solved by computing methods.
The 'techies' answer!
YMS wrote: Though the computer tech response is to say that the question of whether a computer can think is as uninteresting as asking whether a submarine can swim.
Isn't it just!
YMS wrote: After all, Bertrand Russell after 350+ pages didn't manage to prove 1+1=2 (he got to a partial proof, but hadn't yet defined addition), but that doesn't stop us using maths in any case.
Ahh, 'using'! The instrumental key to the universe!
YMS wrote: A computer beat Garry Kasparov at chess (with, yes, the help of human programmers),…
With the 'help' of what? A 'computer' needing help? Strange concept, with them being so 'intelligent'…
YMS wrote: …so we know that 'intelligence-like' capabilities can be produced by computers…
Riiiiight… so 'intelligence' is… 'playing chess'… with… the… help… of… humans… …hmmm… seems to be 'humans' involved in all the definitions, so far…
YMS wrote: …up to the point where we may get computer designers producing schematics of cars for robot factories to build.
So,… the human 'designers' produce… and the robots do the donkey work… Obviously, these 'robots' have even less 'intelligence' than the 'computers'!
YMS wrote: BTW, I am taken with Searle's Chinese room argument, but a fully virtualised brain could bypass the question of intentionality.
You'll have to bring out the relevance of this for communists, comrade. I'm in the dark.
August 7, 2013 at 9:00 am #95283 LBird (Participant)
Young Master Smeet wrote: BTW, I am taken with Searle's Chinese room argument, but a fully virtualised brain could bypass the question of intentionality.
I've had a brief look at this issue, and I'm struck by two of its ideological starting points: a) brain = mind; b) the 'individualist' context of the interactions.
Discussions about thinking, intelligence, consciousness and intentionality are related to 'mind'. 'Mind' is a social category, not a biological one, so searching for those characteristics in a 'brain' would be like searching for 'speed' in a statue of the Spirit of Ecstasy on the bonnet of a stationary Rolls-Royce.
Only a society that values 'individuals' and 'geniuses' would see 'a brain' as a starting point for these researches. For this type of society, 'intelligence' is some phenomenon in individuals, rather than a social product. If, however, 'intelligence' is regarded as seated in society, the only way to create artificial intelligence would be to create a suitable society, rather than a 'brain'.
In this sense, we could regard humanity's creation of a communist society as the supreme act of producing an artificial intelligence ('artificial' in the sense of something which doesn't exist 'naturally', but which must be consciously crafted by humans). If this line is taken, an AI scientist must be a communist to conduct serious research, regarding its materials, theories, purposes, aims, etc. If an AI scientist uses bourgeois science, with its ideological assumptions, in my opinion they might as well be making mud pies and trying to converse with them.
This is all off the top of my head – what do other comrades think?
August 7, 2013 at 9:37 am #95284 Young Master Smeet (Moderator)
ALB wrote: The November/December 2012 issue carried an article by Massimo Pigliucci, a philosopher of science who writes a regular "Thinking About Science" column, entitled "Singularity As Pseudoscience".
Well, that is one strong critique (certainly against the 'Rapture of the Nerds' end of the spectrum). But the singularity also includes the possibility of human-augmented intelligence, or network emergence.
A couple of examples. Cricket: despite the Ashes referral issues, one effect of technology has been to radically alter LBW calls. For years, human-eye umpiring was giving not out to balls that Hawk-Eye proved were actually plumb. So now the humans have responded by learning what a real LBW looks like, and are calling it better. Likewise, I've used this before, but early twentieth-century chess masters were apparently blunder-prone, and missed lines that are obvious to the current generation, who have been trained and schooled with computer chess programs that can show them the deep outcomes of their choices.
The point isn't that an artificial person might be created, but that intelligence-like activities, such as lawyering, or designing bridges, could be computerised and could be better than the human version.
August 7, 2013 at 9:47 am #95285 Young Master Smeet (Moderator)
Just to explain to people what the Chinese room is: http://en.wikipedia.org/wiki/Chinese_room
Searle wrote: "Suppose that I'm locked in a room and … that I know no Chinese, either written or spoken". He further supposes that he has a set of rules in English that "enable me to correlate one set of formal symbols with another set of formal symbols," that is, the Chinese characters. These rules allow him to respond, in written Chinese, to questions, also written in Chinese, in such a way that the posers of the questions – who do understand Chinese – are convinced that Searle can actually understand the Chinese conversation too, even though he cannot. Similarly, he argues that if there is a computer program that allows a computer to carry on an intelligent conversation in written Chinese, the computer executing the program would not understand the conversation either.
So, this is a refutation, more or less, of the Turing test (in which an AI passes if it can convince a human it is a human in conversation). The section of the Wikipedia article called "Brain simulation and connectionist replies: redesigning the room" is directly relevant to this discussion. I find Searle interesting, but I find compelling the idea of some neuroscientists that intention in the human mind is retrospective (personally, I think as a result of language and our 'social' mind, in which we feel we are explaining ourselves to our fellows). But I do have some sympathy with the idea that only a physical entity structured like a human brain can actually produce human consciousness/intelligence.
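To make the rule-following point concrete, here is a deliberately crude sketch of my own (not Searle's, and not a real chatbot): a 'room' that answers questions by pure symbol lookup. The rule book and the phrases in it are placeholder examples. Nothing in the program understands anything, yet from outside it looks conversational.

    # A toy 'Chinese room': replies are produced by symbol lookup alone.
    # The rule book (phrases here are arbitrary examples) maps question
    # strings to canned answers; the program attaches no meaning to them.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",     # "How's the weather?" -> "The weather is fine."
    }

    def room(question):
        # Follow the rules mechanically; unknown symbols get a stock reply.
        return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(room("你好吗？"))  # looks like conversation from outside the room

A real conversational program is vastly more elaborate, but Searle's claim is that elaboration doesn't change the kind of thing going on: syntax all the way down, no semantics.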
August 7, 2013 at 9:53 am #95286 Young Master Smeet (Moderator)
LBird wrote: So,… the human 'designers' produce… and the robots do the donkey work…
Sorry, my fault: 'computer designers' = computer-based robots, as opposed to physical robots. The idea being that a computer could design a bridge, plan the project, place the requisitions for parts, and co-ordinate the physical robots to build the bridge.
As for helping Deep Blue: the computer programmers reprogrammed it between matches, changing its evaluations. So it's true to say that it lacked an ability to learn.
As I've linked to before, the Robot World Cup is worth watching: http://www.robocup2013.org/ The level of computation needed to find and kick a ball is incredible.
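On what 'changing its evaluations' means in practice, here is a toy sketch of the search-plus-evaluation pattern chess engines use. The game (add 1 or 2 to a running total, first to reach 10 wins) is a trivial stand-in of my own, nothing to do with Deep Blue's actual code, but the shape is the same: a human-written evaluation function and a mechanical search over moves. Retuning an engine between games amounts to editing the evaluation.

    # Toy minimax search. The 'intelligence-like' behaviour comes from two
    # human-authored pieces: the evaluation function and the search itself.
    def moves(total):
        # Legal moves: add 1 or 2, until someone has reached 10.
        return [] if total >= 10 else [1, 2]

    def evaluate(total, maximiser_to_move):
        # Terminal position: whoever just moved pushed the total to 10+ and won.
        if total >= 10:
            return -1 if maximiser_to_move else 1
        return 0  # positions at the search horizon are judged neutral

    def minimax(total, depth, maximiser_to_move):
        if depth == 0 or not moves(total):
            return evaluate(total, maximiser_to_move), None
        best_score = float("-inf") if maximiser_to_move else float("inf")
        best_move = None
        for m in moves(total):
            score, _ = minimax(total + m, depth - 1, not maximiser_to_move)
            if (maximiser_to_move and score > best_score) or \
               (not maximiser_to_move and score < best_score):
                best_score, best_move = score, m
        return best_score, best_move

    print(minimax(0, 12, True))  # the engine 'chooses' its opening move: (1, 1)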
August 7, 2013 at 9:59 am #95287 LBird (Participant)
Young Master Smeet wrote: …intelligence-like activities, such as lawyering, or designing bridges…
YMS wrote: …the idea that only a physical entity structured like a human brain can actually produce human consciousness/intelligence.
I'm afraid I don't share your opinion that 'lawyering or designing' constitute 'intelligence' or its very distant cousin 'intelligence-like'. Further, you haven't defined 'intelligence', or said why you consider 'a human brain' produces this undefined entity. These are philosophical and thus ideological issues, not computing problems. If you personally approach these issues employing bourgeois constructs that you've been taught, I think that you'll go astray, comrade.
August 7, 2013 at 10:24 am #95288 LBird (Participant)
Young Master Smeet wrote: But I do have some sympathy with the idea that only a physical entity structured like a human brain can actually produce human consciousness/intelligence.
Well, let's take the example of, not 'structured like', but an actual 'human brain'. If a new-born baby were locked in a box on tubular life-support for the first 18 years of its life, with no human contact, would this mysterious entity named 'intelligence' be produced by or emerge from this brain?
August 7, 2013 at 10:42 am #95289 Young Master Smeet (Moderator)
Well, I won't try to define intelligence, largely because in computer terms, I'd suggest it isn't actually very interesting, and ends up being misleading. 'Intelligence-like' captures it better, because I would maintain that humans produce intelligence-like behaviour, and mistake it for intelligence, much of which, as I said, is a by-blow of our linguistic capacities and our advanced orders of theory of mind: http://en.wikipedia.org/wiki/Theory_of_mind
So, what I'm interested in is robots/computers that not only perform high-order calculations, but can recognise real-world objects (rapidly), and can make deep searches of vast databases and can not only retrieve data but find relevance and manipulate it to produce new data. Designing, say, an entirely new model of car, so that it would not just be efficient and cheap to produce, but also attractive to human beings, would be a very high-order function. Even a computer that can drive a car safely (they exist at prototype stage now) is intelligence-like, because it requires situational awareness of external objects, and some notion of how other drivers are going to react.
These sorts of things, rather than an artificial personality (which is what many people, driven by the movies, mistake for AI).
August 7, 2013 at 11:01 am #95290 LBird (Participant)
Young Master Smeet wrote: Well, I won't try to define intelligence, largely because in computer terms, I'd suggest it isn't actually very interesting, and ends up being misleading.
Well, if the use of the term 'intelligence' is not 'interesting' and indeed is 'misleading', why don't these researchers use a different term, such as… errrmm… 'dumbness'? It wouldn't be because that would let the 'ideological cat' out of the bag, would it?
YMS wrote: So, what I'm interested in is robots/computers…
Yes, I recognise that, comrade! They're very interesting – I worked in the computing profession for 20 years, so I'm with you on that. But why let your enthusiasm for 'computers' become ideologically soiled by insisting on retaining the term 'intelligence' in your laudable efforts?
YMS wrote: These sorts of things, rather than an artificial personality (which is what many people, driven by the movies, mistake for AI).
Yeah, spot on! So why add to the confusion by continuing to use the ideologically loaded initials 'AI'? Why not use 'Artificial Dumbness'? Be a leader in your field, and start referring to 'AD' when discussing 'computers'!
We desperately need comrades to engage with bourgeois science and its scientists, and question and defeat their misleading arguments. Our class must be conscious of the class basis of 'science', and challenge its social authority.